Journal Articles
63,400 articles found
1. GLMCNet: A Global-Local Multiscale Context Network for High-Resolution Remote Sensing Image Semantic Segmentation
Authors: Yanting Zhang, Qiyue Liu, Chuanzhao Tian, Xuewen Li, Na Yang, Feng Zhang, Hongyue Zhang. Computers, Materials & Continua, 2026, No. 1, pp. 2086-2110 (25 pages)
High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multihead self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to refine the encoder outputs further, thereby enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable the modeling of rich multiscale global and local information. We also design a stair fusion mechanism to transmit deep semantic information from deep to shallow layers progressively. Finally, we propose the semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
Keywords: Multiscale context; attention mechanism; remote sensing images; semantic segmentation
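As a reference for the mIoU figures quoted above, the sketch below shows one common way to compute mean Intersection over Union from a confusion matrix. It is a generic NumPy illustration, not code from the GLMCNet paper; the class count and label maps are invented for the example.

```python
import numpy as np

def confusion_matrix(pred, target, num_classes):
    """Accumulate a (num_classes x num_classes) confusion matrix from label maps."""
    valid = (target >= 0) & (target < num_classes)
    idx = num_classes * target[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    """mIoU = mean over classes of TP / (TP + FP + FN)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-10)
    return iou.mean()

# Toy example: 6-class segmentation maps of size 256 x 256 (random labels).
rng = np.random.default_rng(0)
pred = rng.integers(0, 6, size=(256, 256))
target = rng.integers(0, 6, size=(256, 256))
print("mIoU:", mean_iou(confusion_matrix(pred, target, num_classes=6)))
```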
2. Intelligent Semantic Segmentation with Vision Transformers for Aerial Vehicle Monitoring
Author: Moneerah Alotaibi. Computers, Materials & Continua, 2026, No. 1, pp. 1629-1648 (20 pages)
Advanced traffic monitoring systems encounter substantial challenges in vehicle detection and classification due to the limitations of conventional methods, which often demand extensive computational resources and struggle with diverse data acquisition techniques. This research presents a novel approach for vehicle classification and recognition in aerial image sequences, integrating multiple advanced techniques to enhance detection accuracy. The proposed model begins with preprocessing using Multiscale Retinex (MSR) to enhance image quality, followed by Expectation-Maximization (EM) segmentation for precise foreground object identification. Vehicle detection is performed using the state-of-the-art YOLOv10 framework, while feature extraction incorporates Maximally Stable Extremal Regions (MSER), Dense Scale-Invariant Feature Transform (Dense SIFT), and Zernike Moments features to capture distinct object characteristics. Feature optimization is further refined through a hybrid swarm-based optimization algorithm, ensuring optimal feature selection for improved classification performance. The final classification is conducted using a Vision Transformer, leveraging its robust learning capabilities for enhanced accuracy. Experimental evaluations on benchmark datasets, including UAVDT and the Unmanned Aerial Vehicle Intruder Dataset (UAVID), demonstrate the superiority of the proposed approach, achieving an accuracy of 94.40% on UAVDT and 93.57% on UAVID. The results highlight the efficacy of the model in significantly enhancing vehicle detection and classification in aerial imagery, outperforming existing methodologies and offering a statistically validated improvement for intelligent traffic monitoring systems.
Keywords: Machine learning; semantic segmentation; remote sensors; deep learning; object monitoring system
3. Deep Learning for Brain Tumor Segmentation and Classification: A Systematic Review of Methods and Trends
Authors: Ameer Hamza, Robertas Damaševicius. Computers, Materials & Continua, 2026, No. 1, pp. 132-172 (41 pages)
This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
Keywords: Brain tumor segmentation; brain tumor classification; deep learning; vision transformers; hybrid models
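For readers unfamiliar with the Dice scores benchmarked in the review, the following minimal NumPy sketch computes the Dice coefficient for a binary segmentation mask. It is an illustrative definition only, not code from any of the surveyed studies.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2 * |P ∩ T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example on two overlapping square masks.
pred = np.zeros((64, 64), dtype=np.uint8)
target = np.zeros((64, 64), dtype=np.uint8)
pred[10:40, 10:40] = 1
target[20:50, 20:50] = 1
print("Dice:", dice_coefficient(pred, target))
```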
4. SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention
Authors: Seyong Jin, Muhammad Fayaz, L. Minh Dang, Hyoung-Kyu Song, Hyeonjoon Moon. Computers, Materials & Continua, 2026, No. 1, pp. 511-533 (23 pages)
Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation technology reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize the performance of brain tumor segmentation, this research introduces a novel SwinUNETR-based model by integrating a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block utilizes hierarchical features and channel-specific attention mechanisms to further fuse information at different scales transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieves superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. In particular, ablation studies clarify the rationale and contribution of the model design and verify the effectiveness of the proposed HCAD decoder block. The results of this study are expected to contribute greatly to enhancing the efficiency of clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
Keywords: Attention mechanism; brain tumor segmentation; channel-wise attention decoder; deep learning; medical imaging; MRI; transformer; U-Net
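The HCAD decoder itself is not specified in this abstract; the PyTorch sketch below only illustrates the general idea of channel-wise attention on a 3D feature map using a squeeze-and-excitation style gate. The module structure and reduction ratio are assumptions for illustration, not the published design.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation style channel gate for 5D tensors (B, C, D, H, W)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                 # reweight channels, keep spatial detail

# Toy decoder feature map: batch 1, 32 channels, 16^3 volume.
feat = torch.randn(1, 32, 16, 16, 16)
print(ChannelAttention3D(32)(feat).shape)
```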
5. Deep Learning-Based Toolkit Inspection: Object Detection and Segmentation in Assembly Lines
Authors: Arvind Mukundan, Riya Karmakar, Devansh Gupta, Hsiang-Chen Wang. Computers, Materials & Continua, 2026, No. 1, pp. 1255-1277 (23 pages)
Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, prone to errors, and lacking in consistency, emphasizing the need for a reliable and automated inspection system. Leveraging both object detection and image segmentation approaches, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset consisting of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's increased latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (recall, accuracy, F1-score, and precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. Beyond detection and segmentation, the application precisely measures socket dimensions by applying edge detection techniques to the YOLOv11 segmentation masks, enabling specification-level quality control directly on the assembly line and improving real-time inspection capability. The implementation is a significant step toward intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate way to perform automated inspection and dimensional verification tasks.
Keywords: Tool detection; image segmentation; object detection; assembly line automation; Industry 4.0; Intel RealSense; deep learning; toolkit verification; RGB-D imaging; quality assurance
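The paper's dimension-measurement step is described only at a high level; the sketch below shows one plausible way to estimate a socket's diameter from a binary segmentation mask with OpenCV contours. The pixel-to-millimetre scale and the synthetic mask are assumptions for illustration, not values from the deployed application.

```python
import cv2
import numpy as np

def socket_diameter_mm(mask, mm_per_pixel):
    """Estimate the diameter of the largest round object in a binary mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (_, _), radius_px = cv2.minEnclosingCircle(largest)   # fit a circle to the outline
    return 2.0 * radius_px * mm_per_pixel

# Toy mask: a filled circle of radius 40 px; assume 0.25 mm per pixel.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (100, 100), 40, 255, thickness=-1)
print("Estimated diameter (mm):", socket_diameter_mm(mask, mm_per_pixel=0.25))
```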
6. A Survey on Deep Learning-based Fine-grained Object Classification and Semantic Segmentation (cited 47 times)
Authors: Bo Zhao, Jiashi Feng, Xiao Wu, Shuicheng Yan. International Journal of Automation and Computing (EI, CSCD), 2017, No. 2, pp. 119-135 (17 pages)
Deep learning technology has shown impressive performance in various vision tasks such as image classification, object detection, and semantic segmentation. In particular, recent advances in deep learning techniques bring encouraging performance to fine-grained image classification, which aims to distinguish subordinate-level categories such as bird species or dog breeds. This task is extremely challenging due to high intra-class and low inter-class variance. In this paper, we review four types of deep learning based fine-grained image classification approaches: general convolutional neural networks (CNNs), part detection based, ensemble of networks based, and visual attention based approaches. In addition, deep learning based semantic segmentation approaches are also covered, and the region proposal based and fully convolutional network based approaches for semantic segmentation are introduced respectively.
Keywords: Deep learning; fine-grained image classification; semantic segmentation; convolutional neural network (CNN); recurrent neural network (RNN)
7. Multi-Scale Image Segmentation Model for Fine-Grained Recognition of Zanthoxylum Rust (cited 1 time)
Authors: Fan Yang, Jie Xu, Haoliang Wei, Meng Ye, Mingzhu Xu, Qiuru Fu, Lingfei Ren, Zhengwen Huang. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 2963-2980 (18 pages)
Zanthoxylum bungeanum Maxim, generally called prickly ash, is widely grown in China. Zanthoxylum rust is the main disease affecting the growth and quality of Zanthoxylum. Traditional methods for recognizing the degree of infection of Zanthoxylum rust mainly rely on manual experience. Due to the complex colors and shapes of rust areas, the accuracy of manual recognition is low and difficult to quantify. In recent years, the application of artificial intelligence technology in the agricultural field has gradually increased. In this paper, based on the DeepLabV2 model, we propose a Zanthoxylum rust image segmentation model built on the FASPP module and enhanced features of rust areas. We also construct a fine-grained Zanthoxylum rust image dataset, in which the Zanthoxylum rust images are segmented and labeled according to leaves, spore piles, and brown lesions. The experimental results show that the proposed Zanthoxylum rust image segmentation method is effective: the segmentation accuracy rates of leaves, spore piles, and brown lesions reach 99.66%, 85.16%, and 82.47% respectively, MPA reaches 91.80%, and MIoU reaches 84.99%. At the same time, the proposed image segmentation model is also efficient, processing 22 images per minute. This article provides an intelligent method for efficiently and accurately recognizing the degree of infection of Zanthoxylum rust.
Keywords: Zanthoxylum rust; image segmentation; deep learning
8. MG-SLAM: RGB-D SLAM Based on Semantic Segmentation for Dynamic Environment in the Internet of Vehicles (cited 1 time)
Authors: Fengju Zhang, Kai Zhu. Computers, Materials & Continua, 2025, No. 2, pp. 2353-2372 (20 pages)
The Internet of Vehicles (IoV) has become an important direction in the field of intelligent transportation, in which vehicle positioning is a crucial part. Simultaneous Localization and Mapping (SLAM) technology plays a crucial role in vehicle localization and navigation. Traditional SLAM systems are designed for static environments and can suffer poor accuracy and robustness in dynamic environments where objects are in constant movement. To address this issue, a new real-time visual SLAM system called MG-SLAM has been developed. Based on ORB-SLAM2, MG-SLAM incorporates a dynamic target detection process that enables the detection of both known and unknown moving objects. In this process, a separate semantic segmentation thread segments dynamic target instances, and the Mask R-CNN algorithm is run on the Graphics Processing Unit (GPU) to accelerate segmentation. To reduce computational cost, only key frames are segmented to identify known dynamic objects. Additionally, a multi-view geometry method is adopted to detect unknown moving objects. The results demonstrate that MG-SLAM achieves higher precision, with an improvement from 0.2730 m to 0.0135 m. Moreover, the processing time required by MG-SLAM is significantly lower than that of other dynamic-scene SLAM algorithms, which illustrates its efficacy in locating objects in dynamic scenes.
Keywords: Visual SLAM; dynamic scene; semantic segmentation; GPU acceleration; key segmentation frame
9. MEET: A Million-Scale Dataset for Fine-Grained Geospatial Scene Classification With Zoom-Free Remote Sensing Imagery (cited 1 time)
Authors: Yansheng Li, Yuning Wu, Gong Cheng, Chao Tao, Bo Dang, Yu Wang, Jiahao Zhang, Chuge Zhang, Yiting Liu, Xu Tang, Jiayi Ma, Yongjun Zhang. IEEE/CAA Journal of Automatica Sinica, 2025, No. 5, pp. 1004-1023 (20 pages)
Accurate fine-grained geospatial scene classification using remote sensing imagery is essential for a wide range of applications. However, existing approaches often rely on manually zooming remote sensing images at different scales to create typical scene samples. This approach fails to adequately support the fixed-resolution image interpretation requirements of real-world scenarios. To address this limitation, we introduce the million-scale fine-grained geospatial scene classification dataset (MEET), which contains over 1.03 million zoom-free remote sensing scene samples, manually annotated into 80 fine-grained categories. In MEET, each scene sample follows a scene-in-scene layout, where the central scene serves as the reference and auxiliary scenes provide crucial spatial context for fine-grained classification. Moreover, to tackle the emerging challenge of scene-in-scene classification, we present the context-aware transformer (CAT), a model specifically designed for this task, which adaptively fuses spatial context to accurately classify the scene samples by learning attentional features that capture the relationships between the center and auxiliary scenes. Based on MEET, we establish a comprehensive benchmark for fine-grained geospatial scene classification, evaluating CAT against 11 competitive baselines. The results demonstrate that CAT significantly outperforms these baselines, achieving a 1.88% higher balanced accuracy (BA) with the Swin-Large backbone and a notable 7.87% improvement with the Swin-Huge backbone. Further experiments validate the effectiveness of each module in CAT and show its practical applicability to urban functional zone mapping. The source code and dataset will be publicly available at https://jerrywyn.github.io/project/MEET.html.
Keywords: fine-grained geospatial scene classification (FGSC); million-scale dataset; remote sensing imagery (RSI); scene-in-scene transformer
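The balanced accuracy (BA) gains reported for CAT refer to the mean of per-class recall, which weights rare and common classes equally. The snippet below is a generic illustration of that metric using scikit-learn, not code released with MEET; the toy labels are invented.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

# Toy predictions for a 4-class, class-imbalanced problem.
y_true = np.array([0, 0, 0, 0, 0, 1, 1, 2, 2, 3])
y_pred = np.array([0, 0, 0, 1, 0, 1, 1, 2, 3, 3])

# Balanced accuracy = average recall over classes, so rare classes count equally.
print("BA:", balanced_accuracy_score(y_true, y_pred))
```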
10. New progresses of fine-grained sediment gravity-flow deposits and their importance for unconventional shale oil and gas plays (cited 1 time)
Authors: Tian Yang, Ying-Lin Liu. Petroleum Science, 2025, No. 1, pp. 1-15 (15 pages)
Fine-grained sediments are widely distributed and constitute the most abundant component of sedimentary systems, so research on their genesis and distribution is of great significance. In recent years, fine-grained sediment gravity flows (FGSGF) have been recognized as an important transport and depositional mechanism for accumulating thick successions of fine-grained sediments. Through a comprehensive review and synthesis of global research on FGSGF deposition, the characteristics, depositional mechanisms, and distribution patterns of fine-grained sediment gravity-flow deposits (FGSGFD) are discussed, and future research prospects are clarified. In addition to the traditionally recognized low-density turbidity currents and muddy debris flows, wave-enhanced gravity flows, low-density muddy hyperpycnal flows, and hypopycnal plumes can all form widely distributed FGSGFD. At the same time, the evolution of FGSGF during transport can result in transitional and hybrid gravity-flow deposits. The combination of multiple triggering mechanisms promotes the widespread development of FGSGFD, without temporal or spatial limitations. Different types and concentrations of clay minerals, organic matter, and organo-clay complexes are the keys to controlling the flow transformation of FGSGF from low-concentration turbidity currents to high-concentration muddy debris flows. Further study is needed on the interaction mechanisms of FGSGF initiated in different ways, the evolution of FGSGF under organic-inorganic synergy, and the factors controlling the distribution patterns of FGSGFD. The study of FGSGFD can shed new light on the formation of the widely developed thin-bedded siltstones within shales and may broaden the exploration scope of shale oil and gas, which has important geological significance for unconventional shale oil and gas.
Keywords: fine-grained sediment gravity-flow; depositional mechanism; transportation and evolution; distribution pattern; shale oil and gas
11. Leci: Learnable Evolutionary Category Intermediates for Unsupervised Domain Adaptive Segmentation (cited 1 time)
Authors: Qiming ZHANG, Yufei XU, Jing ZHANG, Dacheng TAO. Artificial Intelligence Science and Engineering, 2025, No. 1, pp. 37-51 (15 pages)
To avoid the laborious annotation process for dense prediction tasks like semantic segmentation, unsupervised domain adaptation (UDA) methods have been proposed to leverage the abundant annotations from a source domain, such as a virtual world (e.g., 3D games), and adapt models to the target domain (the real world) by narrowing the domain discrepancies. However, because of the large domain gap, directly aligning two distinct domains without considering intermediates leads to inefficient alignment and inferior adaptation. To address this issue, we propose a novel learnable evolutionary Category Intermediates (CIs) guided UDA model named Leci, which enables information transfer between the two domains via two processes, Distilling and Blending. Starting from a random initialization, the CIs automatically learn shared category-wise semantics from the two domains in the Distilling process. Then, the learned semantics in the CIs are sent back to blend the domain features through a residual attentive fusion (RAF) module, such that the category-wise features of both domains shift towards each other. As the CIs progressively and consistently learn from the varying feature distributions during training, they evolve to guide the model towards category-wise feature alignment. Experiments on both the GTA5 and SYNTHIA datasets demonstrate Leci's superiority over prior representative methods.
Keywords: unsupervised domain adaptation; semantic segmentation; deep learning
12. BiCLIP-nnFormer: A Virtual Multimodal Instrument for Efficient and Accurate Medical Image Segmentation (cited 1 time)
Authors: Wang Bo, Yue Yan, Mengyuan Xu, Yuqun Yang, Xu Tang, Kechen Shu, Jingyang Ai, Zheng You. Instrumentation, 2025, No. 2, pp. 1-13 (13 pages)
Image segmentation is attracting increasing attention in the field of medical image analysis. Given its widespread utilization across various medical applications, ensuring and improving segmentation accuracy has become a crucial research topic. With advances in deep learning, researchers have developed numerous methods that combine Transformers and convolutional neural networks (CNNs) to create highly accurate models for medical image segmentation. However, efforts to further enhance accuracy by developing larger and more complex models, or by training with more extensive datasets, significantly increase computational resource consumption. To address this problem, we propose BiCLIP-nnFormer (the prefix "Bi" refers to the use of two distinct CLIP models), a virtual multimodal instrument that leverages CLIP models to enhance the segmentation performance of the medical segmentation model nnFormer. Since the two CLIP models (PMC-CLIP and CoCa-CLIP) are pre-trained on large datasets, they do not require additional training, thus conserving computational resources. These models are used offline to extract image and text embeddings from medical images. The embeddings are then processed by the proposed 3D CLIP adapter, which adapts the CLIP knowledge for segmentation tasks through fine-tuning. Finally, the adapted embeddings are fused with feature maps extracted from the nnFormer encoder to generate predicted masks. This process enriches the representation capability of the feature maps by integrating global multimodal information, leading to more precise segmentation predictions. We demonstrate the superiority of BiCLIP-nnFormer and the effectiveness of using CLIP models to enhance nnFormer through experiments on two public datasets, the Synapse multi-organ segmentation dataset (Synapse) and the Automatic Cardiac Diagnosis Challenge dataset (ACDC), as well as a self-annotated lung multi-category segmentation dataset (LMCS).
Keywords: medical image analysis; image segmentation; CLIP; feature fusion; deep learning
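The 3D CLIP adapter is described only at a high level above; the PyTorch sketch below illustrates one simple way a global image or text embedding could be projected and fused with an encoder feature map by broadcasting it over the spatial grid. The dimensions and the concatenation-based fusion are assumptions for illustration, not the BiCLIP-nnFormer design.

```python
import torch
import torch.nn as nn

class EmbeddingFusion(nn.Module):
    """Project a global embedding and fuse it with a (B, C, D, H, W) feature map."""
    def __init__(self, embed_dim, feat_channels):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(embed_dim, feat_channels), nn.ReLU(inplace=True))
        self.fuse = nn.Conv3d(2 * feat_channels, feat_channels, kernel_size=1)

    def forward(self, feat, embedding):
        b, c, d, h, w = feat.shape
        e = self.proj(embedding).view(b, c, 1, 1, 1).expand(b, c, d, h, w)  # broadcast over voxels
        return self.fuse(torch.cat([feat, e], dim=1))                       # mix global and local cues

# Toy CLIP-style embedding (512-d) fused with an encoder feature map.
feat = torch.randn(1, 48, 8, 8, 8)
emb = torch.randn(1, 512)
print(EmbeddingFusion(512, 48)(feat, emb).shape)
```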
13. High-Precision Brain Tumor Segmentation Using a Progressive Layered U-Net (PLU-Net) with Multi-Scale Data Augmentation and Attention Mechanisms on Multimodal Magnetic Resonance Imaging (cited 1 time)
Authors: Noman Ahmed Siddiqui, Muhammad Tahir Qadri, Muhammad Ovais Akhter, Zain Anwar Ali. Instrumentation, 2025, No. 1, pp. 77-92 (16 pages)
Brain tumors present significant challenges in medical diagnosis and treatment, where early detection is crucial for reducing morbidity and mortality rates. This research introduces a novel deep learning model, the Progressive Layered U-Net (PLU-Net), designed to improve brain tumor segmentation accuracy from Magnetic Resonance Imaging (MRI) scans. The PLU-Net extends the standard U-Net architecture by incorporating progressive layering, attention mechanisms, and multi-scale data augmentation. The progressive layering involves a cascaded structure that refines segmentation masks across multiple stages, allowing the model to capture features at different scales and resolutions. Attention gates within the convolutional layers selectively focus on relevant features while suppressing irrelevant ones, enhancing the model's ability to delineate tumor boundaries. Additionally, multi-scale data augmentation techniques increase the diversity of training data and boost the model's generalization capabilities. Evaluated on the BraTS 2021 dataset, the PLU-Net achieved state-of-the-art performance with a Dice coefficient of 0.91, specificity of 0.92, sensitivity of 0.89, and Hausdorff95 of 2.5, outperforming other modified U-Net architectures in segmentation accuracy. These results underscore the effectiveness of the PLU-Net in improving brain tumor segmentation from MRI scans, supporting clinicians in early diagnosis, treatment planning, and the development of new therapies.
Keywords: brain tumor segmentation; MRI; machine learning; BraTS; deep learning model; PLU-Net
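PLU-Net's exact attention gates are not reproduced here; the sketch below shows a standard additive attention gate of the kind used in Attention U-Net variants, written in PyTorch as a rough illustration of how gating can suppress irrelevant skip-connection features. The channel sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: gating signal g (decoder) modulates skip features x (encoder)."""
    def __init__(self, in_x, in_g, inter):
        super().__init__()
        self.theta_x = nn.Conv2d(in_x, inter, kernel_size=1)
        self.phi_g = nn.Conv2d(in_g, inter, kernel_size=1)
        self.psi = nn.Conv2d(inter, 1, kernel_size=1)

    def forward(self, x, g):
        a = torch.relu(self.theta_x(x) + self.phi_g(g))   # combine skip and gating signals
        alpha = torch.sigmoid(self.psi(a))                 # per-pixel attention coefficients
        return x * alpha                                   # keep only the relevant regions

# Toy tensors: encoder skip features and a decoder gating signal at the same resolution.
x = torch.randn(1, 64, 32, 32)
g = torch.randn(1, 128, 32, 32)
print(AttentionGate(64, 128, 32)(x, g).shape)
```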
14. Stochastic Augmented-Based Dual-Teaching for Semi-Supervised Medical Image Segmentation
Authors: Hengyang Liu, Yang Yuan, Pengcheng Ren, Chengyun Song, Fen Luo. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 543-560 (18 pages)
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) the segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching auxiliary training strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. More precisely, SADT trains the Student Network using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capability in low-contrast and local areas. In this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that our proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
Keywords: Semi-supervised; medical image segmentation; contrastive learning; stochastic augmented
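The bi-directional copy-paste mask mentioned above builds on the general copy-paste mixing idea used in semi-supervised segmentation; the NumPy sketch below illustrates that general idea with a simple rectangular mask, mixing a labeled and an unlabeled image in both directions. The mask shape and image values are assumptions, and the progressive high-entropy filtering step is not shown.

```python
import numpy as np

def bidirectional_copy_paste(labeled_img, unlabeled_img, mask):
    """Mix two images both ways with a binary mask (1 = copy from the first argument)."""
    mixed_in = labeled_img * mask + unlabeled_img * (1 - mask)    # labeled patch onto unlabeled
    mixed_out = unlabeled_img * mask + labeled_img * (1 - mask)   # unlabeled patch onto labeled
    return mixed_in, mixed_out

# Toy grayscale images and a centered rectangular mask.
labeled = np.full((128, 128), 0.8, dtype=np.float32)
unlabeled = np.full((128, 128), 0.2, dtype=np.float32)
mask = np.zeros((128, 128), dtype=np.float32)
mask[32:96, 32:96] = 1.0
a, b = bidirectional_copy_paste(labeled, unlabeled, mask)
print(a.mean(), b.mean())
```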
15. Text-Image Feature Fine-Grained Learning for Joint Multimodal Aspect-Based Sentiment Analysis
Authors: Tianzhi Zhang, Gang Zhou, Shuang Zhang, Shunhang Li, Yepeng Sun, Qiankun Pi, Shuo Liu. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 279-305 (27 pages)
Joint Multimodal Aspect-based Sentiment Analysis (JMASA) is a significant task in the research of multimodal fine-grained sentiment analysis, which combines two subtasks: Multimodal Aspect Term Extraction (MATE) and Multimodal Aspect-oriented Sentiment Classification (MASC). Currently, most existing models for JMASA only perform text and image feature encoding at a basic level and often neglect the in-depth analysis of unimodal intrinsic features, which may lead to low accuracy of aspect term extraction and poor sentiment prediction due to insufficient learning of intra-modal features. Given this problem, we propose a Text-Image Feature Fine-grained Learning (TIFFL) model for JMASA. First, we construct an enhanced adjacency matrix of word dependencies and adopt a graph convolutional network to learn the syntactic structure features of the text, which addresses the context interference problem of identifying different aspect terms. Then, adjective-noun pairs extracted from the image are introduced to make the semantic representation of visual features more intuitive, which addresses the problem of ambiguous semantic extraction during image feature learning. Thereby, the model performance of aspect term extraction and sentiment polarity prediction can be further optimized and enhanced. Experiments on two Twitter benchmark datasets demonstrate that TIFFL achieves competitive results for JMASA, MATE, and MASC, thus validating the effectiveness of the proposed methods.
Keywords: Multimodal sentiment analysis; aspect-based sentiment analysis; feature fine-grained learning; graph convolutional network; adjective-noun pairs
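To make the graph-convolution step concrete, the sketch below applies one symmetric-normalized GCN layer to word features using a toy dependency adjacency matrix. It is a generic PyTorch illustration of the operation H' = ReLU(ÂHW), not the TIFFL implementation; the matrix values and dimensions are invented.

```python
import torch
import torch.nn as nn

def normalize_adjacency(adj):
    """Symmetric normalization A_hat = D^(-1/2) (A + I) D^(-1/2)."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj_hat):
        return torch.relu(adj_hat @ self.linear(h))   # aggregate neighbors, then nonlinearity

# Toy sentence of 5 tokens with 16-dim features and a chain-like dependency graph.
h = torch.randn(5, 16)
adj = torch.tensor([[0, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=torch.float32)
print(GCNLayer(16, 8)(h, normalize_adjacency(adj)).shape)
```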
16. EILnet: An intelligent model for the segmentation of multiple fracture types in karst carbonate reservoirs using electrical image logs (cited 1 time)
Authors: Zhuolin Li, Guoyin Zhang, Xiangbo Zhang, Xin Zhang, Yuchen Long, Yanan Sun, Chengyan Lin. Natural Gas Industry B, 2025, No. 2, pp. 158-173 (16 pages)
Karst fractures serve as crucial seepage channels and storage spaces for carbonate natural gas reservoirs, and electrical image logs are vital data for visualizing and characterizing such fractures. However, the conventional approach of identifying fractures from electrical image logs predominantly relies on manual processes that are not only time-consuming but also highly subjective. In addition, the heterogeneity and strong dissolution tendency of karst carbonate reservoirs lead to complex and varied fracture geometry, which makes it difficult to accurately identify fractures. In this paper, the electrical image logs network (EILnet), a deep-learning-based intelligent semantic segmentation model with a selective attention mechanism and a selective feature fusion module, was created to enable the intelligent identification and segmentation of different types of fractures from electrical logging images. Data from electrical image logs representing structural and induced fractures were first selected using the sliding window technique, after which image inpainting and data augmentation were applied to improve the generalizability of the model. Various image-processing tools, including the bilateral filter, Laplace operator, and Gaussian low-pass filter, were also applied to the electrical logging images to generate a multi-attribute dataset that helps the model learn the semantic features of the fractures. The results demonstrated that the EILnet model outperforms mainstream deep-learning semantic segmentation models, such as Fully Convolutional Networks (FCN-8s), U-Net, and SegNet, on both the single-channel dataset and the multi-attribute dataset. EILnet provided significant advantages on the single-channel dataset, with a mean intersection over union (MIoU) of 81.32% and pixel accuracy (PA) of 89.37%. On the multi-attribute dataset, the identification capability of all models improved to varying degrees, with EILnet achieving the highest MIoU and PA of 83.43% and 91.11%, respectively. Furthermore, applying the EILnet model to various blind wells demonstrated its ability to provide reliable fracture identification, indicating its promising potential for application.
Keywords: Karst fracture identification; deep learning; semantic segmentation; electrical image logs; image processing
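The multi-attribute dataset described above stacks filtered versions of each logging image; the snippet below sketches that preprocessing idea with standard OpenCV calls (bilateral filter, Laplacian, Gaussian blur). The filter parameters and the synthetic input are assumptions for illustration, not the values used for EILnet.

```python
import cv2
import numpy as np

def multi_attribute_stack(gray):
    """Stack the raw image with bilateral, Laplacian, and Gaussian attributes as channels."""
    bilateral = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)   # edge-preserving smoothing
    laplacian = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_16S, ksize=3))  # edge/contrast attribute
    gaussian = cv2.GaussianBlur(gray, (5, 5), 0)                               # low-pass attribute
    return np.stack([gray, bilateral, laplacian, gaussian], axis=-1)

# Toy single-channel "logging image" patch.
img = (np.random.rand(256, 64) * 255).astype(np.uint8)
print(multi_attribute_stack(img).shape)   # (256, 64, 4)
```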
17. M2ANet: Multi-branch and multi-scale attention network for medical image segmentation (cited 1 time)
Authors: Wei Xue, Chuanghui Chen, Xuan Qi, Jian Qin, Zhen Tang, Yongsheng He. Chinese Physics B, 2025, No. 8, pp. 547-559 (13 pages)
Convolutional neural network (CNN) based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, because CNNs cannot effectively capture global information from images, they can easily lose contours and textures in segmentation results. The transformer model, by contrast, can effectively capture long-range dependencies in an image, and combining the CNN and the transformer can effectively extract both local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. In the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features, reducing the information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and to alleviate vanishing gradients. In the third component, we design a multi-scale feature fusion module, in which adaptive average pooling and position encoding enhance contextual features, and multi-head attention is then introduced to further enrich the feature representation. Finally, we validate the effectiveness and feasibility of the proposed M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly with respect to preserving contours and textures.
Keywords: medical image segmentation; convolutional neural network; multi-branch attention; multi-scale feature fusion
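As a rough illustration of the third component described above (adaptive average pooling plus multi-head attention over pooled tokens), the PyTorch sketch below pools a CNN feature map to a fixed grid, adds a learnable position embedding, and applies self-attention. The token grid size, embedding choice, and dimensions are assumptions, not the published M2ANet configuration.

```python
import torch
import torch.nn as nn

class PooledSelfAttention(nn.Module):
    """Pool a (B, C, H, W) feature map to a token grid and apply multi-head self-attention."""
    def __init__(self, channels, grid=8, heads=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(grid)
        self.pos = nn.Parameter(torch.zeros(1, grid * grid, channels))   # learnable position encoding
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        tokens = self.pool(x).flatten(2).transpose(1, 2) + self.pos      # (B, grid*grid, C)
        out, _ = self.attn(tokens, tokens, tokens)                       # enrich context among tokens
        return out

feat = torch.randn(2, 64, 32, 32)
print(PooledSelfAttention(64)(feat).shape)   # (2, 64, 64)
```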
18. 3D medical image segmentation using the serial-parallel convolutional neural network and transformer based on cross-window self-attention (cited 1 time)
Authors: Bin Yu, Quan Zhou, Li Yuan, Huageng Liang, Pavel Shcherbakov, Xuming Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 2, pp. 337-348 (12 pages)
The convolutional neural network (CNN) with the encoder-decoder structure is popular in medical image segmentation due to its excellent local feature extraction ability, but it faces limitations in capturing global features. The transformer can extract global information well, but adapting it to small medical datasets is challenging and its computational complexity can be heavy. In this work, a serial and parallel network is proposed for accurate 3D medical image segmentation by combining CNN and transformer and promoting feature interactions across various semantic levels. The core components of the proposed method are the cross-window self-attention based transformer (CWST) and multi-scale local enhanced (MLE) modules. The CWST module enhances global context understanding by partitioning 3D images into non-overlapping windows and calculating sparse global attention between windows. The MLE module selectively fuses features by computing voxel attention between different branch features and uses convolution to strengthen dense local information. Experiments on prostate, atrium, and pancreas MR/CT image datasets consistently demonstrate the advantage of the proposed method over six popular segmentation models in both qualitative evaluation and quantitative indexes such as the Dice similarity coefficient, Intersection over Union, 95% Hausdorff distance, and average symmetric surface distance.
Keywords: convolution neural network; cross-window self-attention; medical image segmentation; transformer
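The cross-window attention described above first partitions the volume into non-overlapping windows; the helper below sketches such a 3D window partition in PyTorch, assuming the volume dimensions are divisible by the window size. It is a generic illustration, not the CWST implementation.

```python
import torch

def window_partition_3d(x, ws):
    """Split (B, C, D, H, W) into non-overlapping ws^3 windows -> (num_windows*B, C, ws, ws, ws)."""
    b, c, d, h, w = x.shape
    x = x.view(b, c, d // ws, ws, h // ws, ws, w // ws, ws)
    x = x.permute(0, 2, 4, 6, 1, 3, 5, 7).contiguous()
    return x.view(-1, c, ws, ws, ws)

# Toy volume: batch 1, 8 channels, 32^3 voxels, window size 8 -> 64 windows.
vol = torch.randn(1, 8, 32, 32, 32)
print(window_partition_3d(vol, ws=8).shape)   # (64, 8, 8, 8, 8)
```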
19. Dual networks with hierarchical attention for fine-grained image classification
Authors: YANG Tao, WANG Gaihua. Journal of University of Chinese Academy of Sciences (PKU Core Journals), 2025, No. 6, pp. 806-813 (8 pages)
In this paper, we propose a hierarchical attention dual network (DNet) for fine-grained image classification. The DNet randomly selects pairs of inputs from the dataset and compares the differences between them through hierarchical attention feature learning, which simultaneously removes noise and retains salient features. The loss function considers the losses of the differences in paired images according to intra-variance and inter-variance. In addition, we collect a disaster scene dataset from remote sensing images, containing complex scenes and multiple types of disasters, and apply the proposed method to disaster scene classification. Compared with other methods, experimental results show that the DNet with hierarchical attention is robust across different datasets and performs better.
Keywords: dual network (DNet); fine-grained image classification; hierarchical attention features
20. Mechanisms of fine-grained sedimentation and reservoir characteristics of shale oil in continental freshwater lacustrine basin: A case study from Chang 7_(3) sub-member of Triassic Yanchang Formation in southwestern Ordos Basin, NW China (cited 1 time)
Authors: LIU Xianyang, LIU Jiangyan, WANG Xiujuan, GUO Qiheng, Lv Qiqi, YANG Zhi, ZHANG Yan, ZHANG Zhongyi, ZHANG Wenxuan. Petroleum Exploration and Development, 2025, No. 1, pp. 95-111 (17 pages)
Based on recent advancements in shale oil exploration within the Ordos Basin, this study presents a comprehensive investigation of the paleoenvironment, lithofacies assemblages and distribution, depositional mechanisms, and shale oil reservoir characteristics of fine-grained sediment deposition in continental freshwater lacustrine basins, with a focus on the Chang 7_(3) sub-member of the Triassic Yanchang Formation. The research integrates a variety of exploration data, including field outcrops, drilling, logging, core samples, geochemical analyses, and flume simulation. The study indicates the following. (1) The paleoenvironment of the Chang 7_(3) deposition is characterized by a warm and humid climate, frequent monsoon events, and the large water depth of a freshwater lacustrine basin. The paleogeomorphology exhibits an asymmetrical pattern, with steep slopes in the southwest and gentle slopes in the northeast, and can be subdivided into microgeomorphological units, including depressions and ridges in the lakebed as well as ancient channels. (2) The Chang 7_(3) sub-member is characterized by a diverse array of fine-grained sediments, including very fine sandstone, siltstone, mudstone, and tuff. These sediments are primarily distributed in thin interbedded and laminated arrangements vertically. The overall grain size of the sandstone predominantly falls below 62.5 μm, with individual layer thicknesses of 0.05-0.64 m. The deposits contain intact plant fragments and display various sedimentary structures, such as wavy bedding, inverse-to-normal grading sequences, and climbing ripple bedding, indicating a depositional origin associated with density flows. (3) Flume simulation experiments have successfully replicated the transport processes and sedimentary characteristics associated with density flows. The initial phase is characterized by a density-velocity differential, resulting in a thicker, coarser sediment layer at the flow front, while the upper layers are thinner and finer in grain size. During the mid-phase, sliding water effects cause the fluid front to rise and facilitate rapid forward transport. This process generates multiple "new fronts", enabling the long-distance transport of fine-grained sandstones, such as siltstone and argillaceous siltstone, into the center of the lake basin. (4) A sedimentary model primarily controlled by hyperpycnal flows was established for the southwestern part of the basin, highlighting that the frequent occurrence of flood events and the steep slope topography in this area are the primary controlling factors for the development of hyperpycnal flows. (5) Sandstone and mudstone in the Chang 7_(3) sub-member exhibit micro- and nano-scale pore-throat systems; shale oil is present in various lithologies, while the content of movable oil varies considerably, with sandstone exhibiting the highest movable oil content. (6) The fine-grained sediment complexes formed by multiple episodes of density-flow sandstones and mudstones in the Chang 7_(3) sub-member exhibit characteristics of "overall oil-bearing with differential storage capacity". The combination of mudstone with low total organic carbon content (TOC) and siltstone is identified as the most favorable exploration target at present.
Keywords: fine-grained sedimentation; density flow mode; flume simulation experiments; reservoir characteristics; Chang 7_(3) sub-member; Triassic Yanchang Formation; shale oil; Ordos Basin