Journal Articles
3,290 articles found
1. Deep Learning-Based Toolkit Inspection: Object Detection and Segmentation in Assembly Lines
Authors: Arvind Mukundan, Riya Karmakar, Devansh Gupta, Hsiang-Chen Wang. Computers, Materials & Continua, 2026, Issue 1, pp. 1255-1277 (23 pages)
Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, prone to errors, and inconsistent, emphasizing the need for a reliable, automated inspection system. Leveraging both object detection and image segmentation, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's higher latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (Recall, Accuracy, F1-score, and Precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly-line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. Beyond detection and segmentation, the application precisely measures socket dimensions by applying edge-detection techniques to YOLOv11 segmentation masks. This enables specification-level quality control directly on the assembly line, improving real-time inspection capability. The implementation is a significant step toward intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification.
Keywords: tool detection; image segmentation; object detection; assembly line automation; Industry 4.0; Intel RealSense; deep learning; toolkit verification; RGB-D imaging; quality assurance
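The abstract above describes measuring socket dimensions by running edge detection on YOLOv11 segmentation masks. As a hedged illustration of the dimensional step only, here is a minimal sketch that takes a binary mask plus a pre-calibrated millimetre-per-pixel scale; the names `socket_extent_mm` and `mm_per_px` are hypothetical, not from the paper, and the paper's actual edge-detection pipeline is not reproduced here.

```python
def socket_extent_mm(mask, mm_per_px):
    """Largest axis-aligned extent of the segmented region, in mm.

    mask: binary 2D list (rows of 0/1), nonzero pixels belong to the socket.
    mm_per_px: calibration factor from the camera setup (assumed known).
    """
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return 0.0  # nothing segmented
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    width_px = max(xs) - min(xs) + 1
    height_px = max(ys) - min(ys) + 1
    return max(width_px, height_px) * mm_per_px

# Example: a 10-pixel-wide blob at 0.5 mm/px measures 5 mm across.
mask = [[0] * 20 for _ in range(20)]
for y in range(5, 15):
    for x in range(4, 14):
        mask[y][x] = 1
print(socket_extent_mm(mask, 0.5))  # 5.0
```

In a real deployment the scale factor would come from the depth camera's calibration rather than being hard-coded.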
2. A Deep Learning-Based Ocular Structure Segmentation for Assisted Myasthenia Gravis Diagnosis from Facial Images
Authors: Linna Zhao, Jianqiang Li, Xi Xu, Chujie Zhu, Wenxiu Cheng, Suqin Liu, Mingming Zhao, Lei Zhang, Jing Zhang, Jian Yin, Jijiang Yang. Tsinghua Science and Technology, 2025, Issue 6, pp. 2592-2605 (14 pages)
Myasthenia Gravis (MG) is an autoimmune neuromuscular disease. Given that extraocular muscle manifestations are the initial and primary symptoms in most patients, ocular muscle assessment is regarded as a necessary early screening tool. To overcome the limitations of manual clinical assessment, an intuitive idea is to collect data via imaging devices and then analyze them using deep learning (DL) techniques (particularly image segmentation approaches) to enable automatic MG evaluation. Unfortunately, clinical applications of DL in this field have not been thoroughly explored. To bridge this gap, our study prospectively establishes a new DL-based system to promote the diagnosis of MG, with a complete workflow including facial data acquisition, eye region localization, and ocular structure segmentation. Experimental results demonstrate that the proposed system achieves superior ocular structure segmentation performance. Moreover, it markedly improves doctors' diagnostic accuracy. In the future, this endeavor can offer highly promising MG monitoring tools for healthcare professionals, patients, and regions with limited medical resources.
Keywords: ocular structure segmentation; deep learning (DL); Myasthenia Gravis (MG) diagnosis; facial images
3. Deep Learning for Brain Tumor Segmentation and Classification: A Systematic Review of Methods and Trends
Authors: Ameer Hamza, Robertas Damaševičius. Computers, Materials & Continua, 2026, Issue 1, pp. 132-172 (41 pages)
This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Non-open-access publications, books, and non-English articles were excluded. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned more than 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited by lack of validation, interpretability concerns, and real-world deployment barriers.
Keywords: brain tumor segmentation; brain tumor classification; deep learning; vision transformers; hybrid models
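This review benchmarks segmentation models by Dice score, reporting values above 0.90 for hybrid architectures. For reference, the standard Dice coefficient for binary masks can be computed as below; this is the textbook definition, not code from any reviewed study.

```python
def dice_score(pred, target):
    """Dice coefficient for two binary masks given as flat 0/1 sequences.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    inter = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as a perfect match.
    return 1.0 if total == 0 else 2.0 * inter / total

pred   = [1, 1, 0, 1, 0]
target = [1, 0, 0, 1, 1]
print(dice_score(pred, target))  # 2*2/(3+3) ≈ 0.667
```

Unlike plain pixel accuracy, Dice is insensitive to the large background class, which is why segmentation reviews prefer it.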
4. A Survey on Deep Learning-based Fine-grained Object Classification and Semantic Segmentation (Cited by 47)
Authors: Bo Zhao, Jiashi Feng, Xiao Wu, Shuicheng Yan. International Journal of Automation and Computing (EI, CSCD), 2017, Issue 2, pp. 119-135 (17 pages)
Deep learning has shown impressive performance in various vision tasks such as image classification, object detection, and semantic segmentation. In particular, recent advances in deep learning techniques bring encouraging performance to fine-grained image classification, which aims to distinguish subordinate-level categories such as bird species or dog breeds. This task is extremely challenging due to high intra-class and low inter-class variance. In this paper, we review four types of deep learning-based fine-grained image classification approaches: general convolutional neural networks (CNNs), part detection-based, ensemble-of-networks-based, and visual attention-based approaches. Deep learning-based semantic segmentation is also covered: region proposal-based and fully convolutional network-based approaches are introduced in turn.
Keywords: deep learning; fine-grained image classification; semantic segmentation; convolutional neural network (CNN); recurrent neural network (RNN)
5. A deep learning-based method for segmentation and quantitative characterization of microstructures in weathering steel from sequential scanning electron microscope images (Cited by 1)
Authors: Bing Han, Wei-hao Wan, Dan-dan Sun, Cai-chang Dong, Lei Zhao, Hai-zhou Wang. Journal of Iron and Steel Research International (SCIE, EI, CSCD), 2022, Issue 5, pp. 836-845 (10 pages)
Microstructural classification is typically done manually by human experts, which introduces uncertainty due to subjectivity and reduces overall efficiency. A high-throughput characterization method is proposed, based on deep learning, rapid acquisition technology, and mathematical statistics, for the recognition, segmentation, and quantification of microstructure in weathering steel. The segmentation results showed that this method was accurate and efficient: segmentation of inclusions and the pearlite phase achieved accuracies of 89.95% and 90.86%, respectively. Batch processing in the MIPAR software, involving threshold segmentation, morphological processing, and small-area deletion, required 1.05 s per image; our system required only 0.102 s, roughly ten times faster than the commercial software. The quantification results were extracted from large volumes of sequential image data (150 mm², 62,216 images, 1024×1024 pixels), ensuring comprehensive statistics. Microstructure information, such as the three-dimensional density distribution and the frequency of the minimum spatial distance of inclusions over the 150 mm² sample surface, was quantified by extracting the coordinates and sizes of individual features. This provides a refined characterization of two-dimensional structures and spatial information that is unattainable manually or with existing software, which will be useful for understanding the properties and behavior of weathering steel and for reducing reliance on physical testing.
Keywords: deep learning; high-throughput; microstructure; sequential image; rapid acquisition; quantitative characterization; segmentation
6. Deep Learning-Based 3D Instance and Semantic Segmentation: A Review (Cited by 1)
Authors: Siddiqui Muhammad Yasir, Hyunsik Ahn. Journal on Artificial Intelligence, 2022, Issue 2, pp. 99-114 (16 pages)
3D segmentation is the process of dividing point cloud data into several homogeneous regions in which points share the same attributes. Segmentation is challenging with point cloud data due to substantial redundancy, fluctuating sample density, and lack of apparent organization. The research area has a wide range of robotics applications, including intelligent vehicles and autonomous mapping and navigation, and many researchers have introduced methodologies and algorithms for it. Deep learning, now the prevailing AI methodology, has been successfully applied to a spectrum of 2D vision domains; however, owing to the specific problems of processing point clouds with deep neural networks, deep learning on point clouds is still in its initial stages. This study examines the strategies that have been proposed for 3D instance and semantic segmentation and gives a complete assessment of current developments in deep learning-based 3D segmentation. The benefits, drawbacks, and design mechanisms of these approaches are studied and addressed. The study also evaluates the competitiveness of the various segmentation algorithms on publicly accessible datasets and surveys the most commonly used pipelines, their advantages and limits, insightful findings, and intriguing future research directions.
Keywords: artificial intelligence; computer vision; robot vision; 3D instance segmentation; 3D semantic segmentation; 3D data; deep learning; point cloud; mesh; voxel; RGB-D segmentation
7. Intelligent Semantic Segmentation with Vision Transformers for Aerial Vehicle Monitoring
Authors: Moneerah Alotaibi. Computers, Materials & Continua, 2026, Issue 1, pp. 1629-1648 (20 pages)
Advanced traffic monitoring systems encounter substantial challenges in vehicle detection and classification due to the limitations of conventional methods, which often demand extensive computational resources and struggle with diverse data acquisition techniques. This research presents a novel approach for vehicle classification and recognition in aerial image sequences, integrating multiple advanced techniques to enhance detection accuracy. The proposed model begins with preprocessing using Multiscale Retinex (MSR) to enhance image quality, followed by Expectation-Maximization (EM) segmentation for precise foreground object identification. Vehicle detection is performed using the state-of-the-art YOLOv10 framework, while feature extraction incorporates Maximally Stable Extremal Regions (MSER), Dense Scale-Invariant Feature Transform (Dense SIFT), and Zernike moment features to capture distinct object characteristics. Feature optimization is further refined through a hybrid swarm-based optimization algorithm, ensuring optimal feature selection for improved classification performance. The final classification is conducted using a Vision Transformer, leveraging its robust learning capabilities for enhanced accuracy. Experimental evaluations on benchmark datasets, including UAVDT and the Unmanned Aerial Vehicle Intruder Dataset (UAVID), demonstrate the superiority of the proposed approach, which achieves an accuracy of 94.40% on UAVDT and 93.57% on UAVID. The results highlight the model's efficacy in significantly enhancing vehicle detection and classification in aerial imagery, outperforming existing methodologies and offering a statistically validated improvement for intelligent traffic monitoring systems.
Keywords: machine learning; semantic segmentation; remote sensors; deep learning; object monitoring system
8. SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention
Authors: Seyong Jin, Muhammad Fayaz, L. Minh Dang, Hyoung-Kyu Song, Hyeonjoon Moon. Computers, Materials & Continua, 2026, Issue 1, pp. 511-533 (23 pages)
Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize segmentation performance, this research introduces a novel SwinUNETR-based model that integrates a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD block uses hierarchical features and channel-specific attention mechanisms to fuse the multi-scale information transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. Ablation studies verify the effectiveness of the proposed HCAD decoder block and clarify the rationale for, and contribution of, the model design. These results are expected to contribute substantially to the efficiency of clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
Keywords: attention mechanism; brain tumor segmentation; channel-wise attention decoder; deep learning; medical imaging; MRI; transformer; U-Net
9. UltraSegNet: A Hybrid Deep Learning Framework for Enhanced Breast Cancer Segmentation and Classification on Ultrasound Images
Authors: Suhaila Abuowaida, Hamza Abu Owida, Deema Mohammed Alsekait, Nawaf Alshdaifat, Diaa Salama Abd Elminaam, Mohammad Alshinwan. Computers, Materials & Continua, 2025, Issue 5, pp. 3303-3333 (31 pages)
Segmenting breast ultrasound images remains challenging due to speckle noise, operator dependency, and variable image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions to preserve the fine anatomical details needed for finding lesion boundaries; (2) a computationally efficient regional attention mechanism that operates on high-resolution features without a transformer's memory overhead; and (3) an adaptive feature fusion strategy that adjusts local and global features according to image content. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset, it obtains a precision of 0.915, a recall of 0.908, and an F1 score of 0.911; on the UDAIT dataset, it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements come at clinically feasible computation times of 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs remarkably well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891, a substantial improvement over traditional methods, which struggle with small-scale features and reach only 0.63-0.71 accuracy. This improvement in small-lesion detection is particularly crucial for early-stage breast cancer identification. These results demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
Keywords: breast cancer; ultrasound image segmentation; classification; deep learning
10. Performance vs. Complexity Comparative Analysis of Multimodal Bilinear Pooling Fusion Approaches for Deep Learning-Based Visual Arabic-Question Answering Systems
Authors: Sarah M. Kamel, Mai A. Fadel, Lamiaa Elrefaei, Shimaa I. Hassan. Computer Modeling in Engineering & Sciences, 2025, Issue 4, pp. 373-411 (39 pages)
Visual question answering (VQA) is a multimodal task that involves deeply understanding the image scene and the question's meaning, and capturing the relevant correlations between the two modalities to infer the appropriate answer. In this paper, we propose a VQA system intended to answer yes/no questions about real-world images in Arabic. To support a robust VQA system, we work in two directions: (1) using deep neural networks, namely ResNet-152 and Gated Recurrent Units (GRU), to semantically represent the given image and question in a fine-grained manner; (2) studying the role of the multimodal bilinear pooling fusion technique in the trade-off between model complexity and overall model performance. Some fusion techniques can significantly increase model complexity, which seriously limits their applicability to VQA models, and so far there is no evidence of how efficient these multimodal bilinear pooling fusion techniques are for VQA systems dedicated to yes/no questions. Hence, a comparative analysis is conducted between eight bilinear pooling fusion techniques, in terms of their ability to reduce model complexity and improve model performance in this class of VQA system. Experiments indicate that these multimodal bilinear pooling fusion techniques improved the VQA model's performance, reaching a best performance of 89.25%. Further, experiments prove that the number of answers in the developed VQA system is a critical factor affecting the effectiveness of these multimodal bilinear pooling techniques in achieving their main objective of reducing model complexity. The Multimodal Local Perception Bilinear Pooling (MLPB) technique shows the best balance between model complexity and performance for VQA systems designed to answer yes/no questions.
Keywords: Arabic-VQA; deep learning-based VQA; deep multimodal information fusion; multimodal representation learning; VQA of yes/no questions; VQA model complexity; VQA model performance; performance-complexity trade-off
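Bilinear pooling fuses two modality vectors through an outer-product-style interaction, and full bilinear pooling needs a large third-order weight tensor, which is exactly the complexity cost this abstract analyzes. Below is a sketch of one widely used low-rank variant (Hadamard-product fusion in a shared space, in the spirit of multimodal low-rank bilinear pooling); the shapes, weights, and function names are illustrative assumptions, not the paper's exact configuration.

```python
import math
import random

random.seed(0)

def linear(x, W):
    """Plain matrix-vector product; W is a list of rows (out_dim x in_dim)."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def low_rank_bilinear(img_feat, q_feat, Wi, Wq):
    """Project each modality into a shared d-dim space, then fuse by
    element-wise (Hadamard) product with tanh non-linearities.
    Cost is O(d * (|img| + |q|)) parameters instead of the full
    bilinear tensor's O(d * |img| * |q|)."""
    zi = [math.tanh(v) for v in linear(img_feat, Wi)]
    zq = [math.tanh(v) for v in linear(q_feat, Wq)]
    return [a * b for a, b in zip(zi, zq)]

# Toy shapes: 4-dim image feature, 3-dim question feature, fused dim 2.
Wi = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
Wq = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
fused = low_rank_bilinear([0.5, -0.2, 0.1, 0.9], [0.3, 0.7, -0.4], Wi, Wq)
print(len(fused))  # 2
```

A yes/no classifier head would then map the fused vector to a single logit; in practice the projections are learned, not random.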
11. Assessing deep learning models for multi-class upper endoscopic disease segmentation: A comprehensive comparative study
Authors: In Neng Chan, Pak Kin Wong, Tao Yan, Yan-Yan Hu, Chon In Chan, Ye-Ying Qin, Chi Hong Wong, In Weng Chan, Ieng Hou Lam, Sio Hou Wong, Zheng Li, Shan Gao, Hon Ho Yu, Liang Yao, Bao-Liang Zhao, Ying Hu. World Journal of Gastroenterology, 2025, Issue 41, pp. 121-150 (30 pages)
BACKGROUND: Upper gastrointestinal (UGI) diseases present diagnostic challenges during endoscopy due to visual similarities, indistinct boundaries, and observer variability, which can lead to missed diagnoses and delayed treatment. Automated segmentation using deep learning (DL) models offers the potential to assist endoscopists, improve diagnostic accuracy, and reduce workload. However, multi-class UGI disease segmentation remains underexplored, with limited annotated datasets and insufficient focus on clinical validation. This study hypothesizes that comparative analysis of different DL architectures can identify models suitable for clinical application, providing actionable insights to reduce diagnostic errors and support clinical decision-making in endoscopic practice. AIM: To evaluate 17 state-of-the-art DL models for multi-class UGI disease segmentation, emphasizing clinical translation and real-world applicability. METHODS: This study evaluated 17 DL models spanning convolutional neural network (CNN)-, transformer-, and mamba-based architectures using a self-collected dataset from two hospitals in Macao and Xiangyang (3313 images, 9 classes) and the public EDD2020 dataset (386 images, 5 classes). Models were assessed for segmentation performance and performance-efficiency trade-off. Statistical analyses were conducted to examine performance differences across architectures. Generalization capability was measured through cross-dataset evaluation (training on the self-collected dataset and testing on EDD2020). RESULTS: Swin-UMamba achieved the highest segmentation performance across both datasets (intersection over union (IoU): 89.06% ± 0.20% self-collected, 77.53% ± 0.32% EDD2020), followed by SegFormer (IoU: 88.94% ± 0.38% self-collected, 77.20% ± 0.98% EDD2020) and ConvNeXt+UPerNet (IoU: 88.48% ± 0.09% self-collected, 76.90% ± 0.61% EDD2020). Statistical analyses showed no significant differences between paradigms, though hierarchical architectures with pre-trained encoders consistently outperformed simpler designs. SegFormer achieved the best balance of accuracy and computational efficiency, with a performance-efficiency trade-off score of 92.02%, making it suitable for real-time clinical use. Cross-dataset evaluation revealed significant performance drops, with generalization retention rates of 64.78% to 71.52%. Transformer-based models, particularly pyramid vision transformer v2 + efficient multi-scale convolutional decoding (IoU: 63.35% ± 1.44%), generalized better than CNN- and mamba-based models. CONCLUSION: Hierarchical architectures such as Swin-UMamba and SegFormer show promise for UGI disease segmentation, reducing missed diagnoses and improving workflows, but robust clinical validation is crucial for real-world deployment.
Keywords: deep learning; upper endoscopy; medical imaging; gastrointestinal diseases; disease segmentation
12. Med-ReLU: A Parameter-Free Hybrid Activation Function for Deep Artificial Neural Network Used in Medical Image Segmentation
Authors: Nawaf Waqas, Muhammad Islam, Muhammad Yahya, Shabana Habib, Mohammed Aloraini, Sheroz Khan. Computers, Materials & Continua, 2025, Issue 8, pp. 3029-3051 (23 pages)
Deep learning (DL) models, derived from the domain of Artificial Neural Networks (ANN), rely on layer-by-layer convolution-based feature representation, guided by forward and backward propagation. A critical aspect of this process is the selection of an appropriate activation function (AF) to ensure robust model learning. However, existing activation functions often fail to effectively address the vanishing-gradient problem, or are complicated by the need for manual parameter tuning. Moreover, most current research on activation function design focuses on classification tasks using natural image datasets such as MNIST, CIFAR-10, and CIFAR-100. To address this gap, this study proposes Med-ReLU, a novel activation function specifically designed for medical image segmentation, which prevents deep learning models from suffering dead neurons or vanishing gradients. Med-ReLU is a hybrid activation function combining the properties of ReLU and Softsign: for positive inputs it adopts the linear behavior of ReLU to avoid vanishing gradients, while for negative inputs it exhibits Softsign's polynomial convergence, ensuring robust training and avoiding inactive neurons across the training set. The training performance and segmentation accuracy of Med-ReLU have been thoroughly evaluated, demonstrating stable learning behavior and resistance to overfitting; it consistently outperforms state-of-the-art activation functions in medical image segmentation tasks. Designed as a parameter-free function, Med-ReLU is simple to implement in complex deep learning architectures, and its effectiveness spans various neural network models and anomaly detection scenarios.
Keywords: medical image segmentation; U-Net; deep learning models; activation function
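The abstract above defines Med-ReLU behaviourally: ReLU's linear branch for positive inputs and Softsign-style saturation for negative inputs. A minimal sketch of that stated piecewise rule follows; the published formula may differ in detail, so treat this as an illustration rather than the authors' implementation.

```python
def med_relu(x: float) -> float:
    """Hybrid activation as described in the abstract: ReLU branch for
    x > 0 (linear, so gradients do not vanish), Softsign branch
    x / (1 + |x|) for x <= 0 (bounded in (-1, 0], so negative neurons
    stay active instead of dying)."""
    return x if x > 0 else x / (1.0 + abs(x))

print(med_relu(2.0))   # 2.0  (identity on the positive side)
print(med_relu(-1.0))  # -0.5 (saturating but non-zero)
```

Both branches meet at zero, so the function is continuous, and neither branch introduces a tunable parameter, matching the "parameter-free" claim.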
13. A Novel Data-Annotated Label Collection and Deep-Learning Based Medical Image Segmentation in Reversible Data Hiding Domain
Authors: Lord Amoah, Jinwei Wang, Bernard-Marie Onzo. Computer Modeling in Engineering & Sciences, 2025, Issue 5, pp. 1635-1660 (26 pages)
Medical image segmentation, i.e., labeling structures of interest in medical images, is crucial for disease diagnosis and treatment in radiology. In reversible data hiding in medical images (RDHMI), segmentation distinguishes only two regions: the focal region, which mainly contains diagnostic information, and the nonfocal region, which serves as the monochrome background. The traditional segmentation methods currently used in RDHMI are inaccurate for complex medical images, while manual segmentation is time-consuming, poorly reproducible, and operator-dependent. Implementing state-of-the-art deep learning (DL) models would bring key benefits, but the lack of domain-specific labels for existing medical datasets makes this impossible. To address this problem, this study provides labels for existing medical datasets based on a hybrid segmentation approach, facilitating the implementation of DL segmentation models in this domain. First, an initial segmentation based on a 3×3 kernel is performed to analyze identified contour pixels before classifying pixels into focal and nonfocal regions. Several human expert raters then evaluate the generated labels and classify them as accurate or inaccurate. The inaccurate labels undergo manual segmentation by medical practitioners and are scored with a hierarchical voting scheme before being assigned to the proposed dataset. To ensure reliability and integrity, we evaluate the accurate automated labels against labels manually segmented by medical practitioners using five assessment metrics: Dice coefficient, Jaccard index, precision, recall, and accuracy. The experimental results show that the labels in the proposed dataset are consistent with the subjective judgment of human experts, with an average accuracy score of 94% and Dice coefficient scores between 90% and 99%. The study further proposes a ResNet-UNet with concatenated spatial and channel squeeze-and-excitation (scSE) architecture for semantic segmentation to validate and illustrate the usefulness of the proposed dataset. The results demonstrate the superior performance of the proposed architecture in accurately separating the focal and nonfocal regions compared to state-of-the-art architectures. Dataset information is released at: https://www.kaggle.com/lordamoah/datasets (accessed on 31 March 2025).
Keywords: reversible data hiding; medical image segmentation; medical image dataset; deep learning
14. Remote sensing image semantic segmentation algorithm based on improved DeepLabv3+
Authors: SONG Xirui, GE Hongwei, LI Ting. Journal of Measurement Science and Instrumentation, 2025, Issue 2, pp. 205-215 (11 pages)
The convolutional neural network (CNN) method based on DeepLabv3+ has several problems in the semantic segmentation of high-resolution remote sensing images: a fixed receptive field for feature extraction, a lack of semantic information, high decoder upsampling rates, and insufficient detail retention. A hierarchical feature fusion network (HFFNet) is therefore proposed. First, a combination of transformer and CNN architectures is employed to extract features from images of varying resolutions, and the extracted features are processed independently. The transformer and CNN features are then fused under the guidance of features from different sources, helping to restore information more completely during decoding. Furthermore, a spatial-channel attention module is added at the final decoding stage to refine features and reduce the semantic gap between shallow CNN features and deep decoder features. Experimental results show that HFFNet performs strongly on the UAVid, LoveDA, Potsdam, and Vaihingen datasets, with an intersection-over-union better than that of DeepLabv3+ and other competing methods, demonstrating strong generalization ability.
Keywords: semantic segmentation; high-resolution remote sensing image; deep learning; transformer model; attention mechanism; feature fusion; encoder; decoder
Deep Multi-Scale and Attention-Based Architectures for Semantic Segmentation in Biomedical Imaging
15
Authors: Majid Harouni, Vishakha Goyal, Gabrielle Feldman, Sam Michael, Ty C. Voss — Computers, Materials & Continua, 2025, Issue 10, pp. 331-366 (36 pages)
Semantic segmentation plays a foundational role in biomedical image analysis, providing precise information about cellular, tissue, and organ structures in both biological and medical imaging modalities. Traditional approaches often fail in the face of challenges such as low contrast, morphological variability, and densely packed structures. Recent advancements in deep learning have transformed segmentation capabilities through the integration of fine-scale detail preservation, coarse-scale contextual modeling, and multi-scale feature fusion. This work provides a comprehensive analysis of state-of-the-art deep learning models, including U-Net variants, attention-based frameworks, and Transformer-integrated networks, highlighting innovations that improve accuracy, generalizability, and computational efficiency. Key architectural components such as convolution operations, shallow and deep blocks, skip connections, and hybrid encoders are examined for their roles in enhancing spatial representation and semantic consistency. We further discuss the importance of hierarchical and instance-aware segmentation and annotation in interpreting complex biological scenes and multiplexed medical images. By bridging methodological developments with diverse application domains, this paper outlines current trends and future directions for semantic segmentation, emphasizing its critical role in facilitating annotation, diagnosis, and discovery in biomedical research.
Keywords: biomedical semantic segmentation; multi-scale feature fusion; fine- and coarse-scale features; convolution operations; shallow and deep blocks; skip connections
Segmentation versus detection: Development and evaluation of deep learning models for Prostate Imaging Reporting and Data System lesion localisation on bi-parametric prostate magnetic resonance imaging
16
Authors: Zhe Min, Fernando J. Bianco, Qianye Yang, Wen Yan, Ziyi Shen, David Cohen, Rachael Rodell, Dean C. Barratt, Yipeng Hu — CAAI Transactions on Intelligence Technology, 2025, Issue 3, pp. 689-702 (14 pages)
Automated prostate cancer detection in magnetic resonance imaging (MRI) scans is of significant importance for cancer patient management. Most existing computer-aided diagnosis systems adopt segmentation methods, while object detection approaches have recently shown promising results. The authors have (1) carefully compared the performance of well-developed segmentation and object detection methods in localising Prostate Imaging Reporting and Data System (PIRADS)-labelled prostate lesions on MRI scans; (2) proposed an additional customised set of lesion-level localisation sensitivity and precision metrics; and (3) proposed efficient ways to ensemble the segmentation and object detection methods for improved performance. The ground-truth (GT)-perspective lesion-level sensitivity and prediction-perspective lesion-level precision are reported, quantifying the ratios of true-positive voxels detected by the algorithms over the number of voxels in the GT-labelled and the predicted regions, respectively. The two networks were trained independently on data from 549 clinical patients with PIRADS-V2 as GT labels, and tested on 161 internal and 100 external MRI scans. At the lesion level, nnDetection outperforms nnUNet for detecting both PIRADS ≥ 3 and PIRADS ≥ 4 lesions in the majority of cases. For example, at an average of 3 false-positive predictions per patient, nnDetection achieves a greater Intersection-over-Union (IoU)-based sensitivity than nnUNet for detecting PIRADS ≥ 3 lesions: 80.78% ± 1.50% versus 60.40% ± 1.64% (p < 0.01). At the voxel level, nnUNet is in general superior or comparable to nnDetection. The proposed ensemble methods achieve improved or comparable lesion-level accuracy in all tested clinical scenarios. For example, at 3 false positives, the lesion-wise ensemble method achieves 82.24% ± 1.43% sensitivity versus 80.78% ± 1.50% (nnDetection) and 60.40% ± 1.64% (nnUNet) for detecting PIRADS ≥ 3 lesions. Consistent conclusions are also drawn from results on the external data set.
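The lesion-level sensitivity and precision described above can be illustrated with a small helper. Lesions are modelled as sets of voxel indices, and a ground-truth lesion counts as detected when some prediction overlaps it with IoU above a threshold; the threshold value and the matching rule here are illustrative assumptions, not the paper's exact protocol.

```python
def lesion_level_metrics(gt_lesions, pred_lesions, iou_thresh=0.1):
    """Lesion-level sensitivity and precision, with each lesion given as a
    set of voxel-coordinate tuples. A GT lesion is 'detected' when some
    prediction reaches the IoU threshold against it, and a prediction is
    'matched' when it reaches the threshold against some GT lesion.
    The 0.1 default threshold is illustrative, not the paper's setting."""
    def iou(a, b):
        union = len(a | b)
        return len(a & b) / union if union else 0.0
    detected = sum(1 for g in gt_lesions
                   if any(iou(g, p) >= iou_thresh for p in pred_lesions))
    matched = sum(1 for p in pred_lesions
                  if any(iou(g, p) >= iou_thresh for g in gt_lesions))
    sensitivity = detected / len(gt_lesions) if gt_lesions else 0.0
    precision = matched / len(pred_lesions) if pred_lesions else 0.0
    return sensitivity, precision
```

One GT lesion of two voxels and one prediction sharing a single voxel have IoU 1/3, so at threshold 0.1 both sensitivity and precision are 1.0.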
Keywords: computer-aided diagnosis; deep learning; magnetic resonance imaging (MRI); medical image segmentation; medical object detection; prostate cancer detection
Automatic Segmentation of Liver Tumor in CT Images with Deep Convolutional Neural Networks (Cited by 19)
17
Authors: Wen Li, Fucang Jia, Qingmao Hu — Journal of Computer and Communications, 2015, Issue 11, pp. 146-151 (6 pages)
Liver tumor segmentation from computed tomography (CT) images is an essential task for the diagnosis and treatment of liver cancer. However, it is difficult owing to the variability of appearances, fuzzy boundaries, heterogeneous densities, and the shapes and sizes of lesions. In this paper, an automatic method based on convolutional neural networks (CNNs) is presented to segment lesions from CT images. CNNs are deep learning models whose convolutional filters can learn hierarchical features from data. We compared the CNN model to popular machine learning algorithms: AdaBoost, Random Forests (RF), and support vector machines (SVM). These classifiers were trained on handcrafted features comprising mean, variance, and contextual features. Experimental evaluation was performed on 30 portal-phase enhanced CT images using leave-one-out cross-validation. The average Dice Similarity Coefficient (DSC), precision, and recall achieved were 80.06% ± 1.63%, 82.67% ± 1.43%, and 84.34% ± 1.61%, respectively. The results show that the CNN method performs better than the other methods and is promising for liver tumor segmentation.
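The Dice Similarity Coefficient used for evaluation above has a short closed form, DSC = 2|A∩B| / (|A| + |B|). A plain-Python version over flat binary masks:

```python
def dice_similarity(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks given as flat
    0/1 sequences of equal length: DSC = 2|A∩B| / (|A| + |B|).
    Two empty masks are treated as a perfect match (DSC = 1)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

For masks [1, 1, 0, 0] and [1, 0, 0, 0] the overlap is one voxel out of three foreground voxels in total, giving DSC = 2/3.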
Keywords: liver tumor segmentation; convolutional neural networks; deep learning; CT image
Rethinking the Dice Loss for Deep Learning Lesion Segmentation in Medical Images (Cited by 7)
18
Authors: ZHANG Yue, LIU Shijie, LI Chunlai, WANG Jianyu — Journal of Shanghai Jiaotong University (Science), EI, 2021, Issue 1, pp. 93-102 (10 pages)
Deep learning is widely used for lesion segmentation in medical images due to its breakthrough performance. Loss functions are critical in a deep learning pipeline and play an important role in segmentation performance. Dice loss is the most commonly used loss function in medical image segmentation, but it also has some disadvantages. In this paper, we discuss the advantages and disadvantages of the Dice loss function, and group the extensions of the Dice loss according to their intended improvements. The performance of some extensions is compared according to core references. Because different loss functions perform differently in different tasks, automatic loss function selection is a potential direction for future work.
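The (soft) Dice loss discussed above is typically written as 1 − (2Σpt + s) / (Σp + Σt + s), where p are predicted foreground probabilities, t the 0/1 targets, and s a smoothing term that keeps the loss defined on empty masks. A minimal single-image sketch (the smoothing value is a common default, not one prescribed by this paper):

```python
def dice_loss(pred_probs, target, smooth=1.0):
    """Soft Dice loss for one image:
    1 - (2 * sum(p*t) + s) / (sum(p) + sum(t) + s),
    with pred_probs per-pixel foreground probabilities, target 0/1 labels,
    and s a smoothing term that avoids division by zero on empty masks."""
    inter = sum(p * t for p, t in zip(pred_probs, target))
    denom = sum(pred_probs) + sum(target)
    return 1.0 - (2.0 * inter + smooth) / (denom + smooth)
```

A perfect hard prediction drives the loss to 0, while predicting all background on a foreground mask pushes it toward 1, which is what makes it usable as a training objective for imbalanced lesion masks.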
Keywords: Dice loss; deep learning; medical image; lesion segmentation
High-Precision Brain Tumor Segmentation Using a Progressive Layered U-Net (PLU-Net) with Multi-Scale Data Augmentation and Attention Mechanisms on Multimodal Magnetic Resonance Imaging (Cited by 1)
19
Authors: Noman Ahmed Siddiqui, Muhammad Tahir Qadri, Muhammad Ovais Akhter, Zain Anwar Ali — Instrumentation, 2025, Issue 1, pp. 77-92 (16 pages)
Brain tumors present significant challenges in medical diagnosis and treatment, where early detection is crucial for reducing morbidity and mortality rates. This research introduces a novel deep learning model, the Progressive Layered U-Net (PLU-Net), designed to improve brain tumor segmentation accuracy from Magnetic Resonance Imaging (MRI) scans. The PLU-Net extends the standard U-Net architecture by incorporating progressive layering, attention mechanisms, and multi-scale data augmentation. The progressive layering involves a cascaded structure that refines segmentation masks across multiple stages, allowing the model to capture features at different scales and resolutions. Attention gates within the convolutional layers selectively focus on relevant features while suppressing irrelevant ones, enhancing the model's ability to delineate tumor boundaries. Additionally, multi-scale data augmentation techniques increase the diversity of training data and boost the model's generalization capabilities. Evaluated on the BraTS 2021 dataset, the PLU-Net achieved state-of-the-art performance with a Dice coefficient of 0.91, specificity of 0.92, sensitivity of 0.89, and a Hausdorff95 of 2.5, outperforming other modified U-Net architectures in segmentation accuracy. These results underscore the effectiveness of the PLU-Net in improving brain tumor segmentation from MRI scans, supporting clinicians in early diagnosis, treatment planning, and the development of new therapies.
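The attention gates mentioned above can be sketched in the additive style popularised by Attention U-Net: project the skip features and a gating signal to a common space, apply ReLU, then a final projection and sigmoid give per-pixel weights that rescale the skip connection. This NumPy version is an assumption about the mechanism, not the PLU-Net's actual layers, and all weights are random placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gating, w_x, w_g, psi):
    """Additive attention gate over a skip connection.
    skip: (C, H, W) encoder features; gating: (G,) coarse gating signal;
    w_x: (F, C), w_g: (F, G), psi: (F,) are placeholder projections."""
    xs = np.tensordot(w_x, skip, axes=([1], [0]))            # (F, H, W)
    gs = (w_g @ gating)[:, None, None]                       # (F, 1, 1)
    q = np.maximum(xs + gs, 0.0)                             # additive attention
    alpha = sigmoid(np.tensordot(psi, q, axes=([0], [0])))   # (H, W) in (0, 1)
    return skip * alpha[None, :, :]                          # rescaled skip
```

Because `alpha` lies in (0, 1) per pixel, the gate can only attenuate skip features, which is how irrelevant regions get suppressed before decoding.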
Keywords: brain tumor segmentation; MRI; machine learning; BraTS; deep learning model; PLU-Net
Semantic Segmentation Using DeepLabv3+ Model for Fabric Defect Detection (Cited by 4)
20
Authors: ZHU Runhu, XIN Binjie, DENG Na, FAN Mingzhu — Wuhan University Journal of Natural Sciences, CAS CSCD, 2022, Issue 6, pp. 539-549 (11 pages)
Currently, numerous automatic fabric defect detection algorithms have been proposed. Traditional machine vision algorithms that set separate parameters for different textures and defects rely on the manual design of corresponding features to complete the detection. To overcome the limitations of traditional algorithms, deep learning-based algorithms can extract more complex image features and perform better in image classification and object detection. A pixel-level defect segmentation methodology using DeepLabv3+, a classical semantic segmentation network, is proposed in this paper. Based on ResNet-18, ResNet-50 and MobileNetv2, three DeepLabv3+ networks are constructed, trained and tested on datasets built from captured and publicly available images. The experimental results show that the performance of the three DeepLabv3+ networks is close to one another on the four proposed indicators (Precision, Recall, F1-score and Accuracy), proving that they achieve defect detection and semantic segmentation, and providing new ideas and technical support for fabric defect detection.
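The four indicators used for comparison (Precision, Recall, F1-score and Accuracy) all reduce to confusion-matrix counts over pixels. A minimal sketch for binary defect masks given as flat 0/1 sequences:

```python
def pixel_metrics(pred, target):
    """Pixel-wise Precision, Recall, F1-score and Accuracy for binary
    defect masks given as equal-length flat 0/1 sequences."""
    tp = sum(p and t for p, t in zip(pred, target))            # defect hit
    fp = sum(p and not t for p, t in zip(pred, target))        # false alarm
    fn = sum((not p) and t for p, t in zip(pred, target))      # missed defect
    tn = sum((not p) and (not t) for p, t in zip(pred, target))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(pred)
    return precision, recall, f1, accuracy
```

For pred = [1, 1, 0, 0] against target = [1, 0, 1, 0], each count is 1, so all four metrics come out to 0.5.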
Keywords: fabric defect detection; semantic segmentation; deep learning; DeepLabv3+