Journal Articles
515 articles found
1. Precision organoid segmentation technique (POST): accurate organoid segmentation in challenging bright-field images (Cited by: 1)
Authors: Xuan Du, Yuchen Li, Jiaping Song, Zilin Zhang, Jing Zhang, Yanhui Li, Zaozao Chen, Zhongze Gu. Bio-Design and Manufacturing, 2026, No. 1, pp. 80-93, I0013-I0016 (18 pages).
Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span from high-throughput drug screening to the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive method for assessing organoid drug sensitivity. However, the precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of the organoid morphology and interference, including overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), which is a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to and aligns with measurements of organoid activity in drug sensitivity experiments. POST is expected to be a valuable tool for drug screening using organoids owing to its capability of automatically and rapidly eliminating interfering substances and thereby streamlining the organoid analysis and drug screening process.
Keywords: organoid; drug screening; deep learning; image segmentation
2. Advances in deep learning for bacterial image segmentation in optical microscopy
Authors: Zhijun Tan, Yang Ding, Huibin Ma, Jintao Li, Danrou Zheng, Hua Bai, Weini Xin, Lin Li, Bo Peng. Journal of Innovative Optical Health Sciences, 2026, No. 1, pp. 30-44 (15 pages).
Microscopy imaging is fundamental in analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Keywords: bacterial image; deep learning; optical microscopy; image segmentation; artificial intelligence
3. RE-UKAN: A Medical Image Segmentation Network Based on Residual Network and Efficient Local Attention
Authors: Bo Li, Jie Jia, Peiwen Tan, Xinyan Chen, Dongjin Li. Computers, Materials & Continua, 2026, No. 3, pp. 2184-2200 (17 pages).
Medical image segmentation is of critical importance in the domain of contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which innovatively improves upon U-KAN. Firstly, a residual network is introduced into the encoder to effectively mitigate gradient vanishing through cross-layer identity mappings, thus enhancing modelling capabilities for complex pathological structures. Secondly, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, thereby improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly outstanding performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice of 93.57%. Compared to the baseline model, it achieves improvements of 3.05% and 1.72%, respectively. These results fully demonstrate RE-UKAN's superior detail retention capability and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinical precision segmentation.
Keywords: image segmentation; U-KAN; residual network; ELA
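The IoU and Dice metrics reported above are related by Dice = 2·IoU/(1+IoU) for binary masks. A minimal sketch of both (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def iou_dice(pred: np.ndarray, gt: np.ndarray) -> tuple:
    """Compute IoU and Dice for two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + gt.sum())
    return float(iou), float(dice)

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True  # 16-pixel prediction
gt = np.zeros((8, 8), bool); gt[3:7, 3:7] = True      # 16-pixel ground truth
iou, dice = iou_dice(pred, gt)                         # overlap is 9 pixels
```

Because Dice is always at least as large as IoU, reported pairs such as 88.18% IoU / 93.57% Dice are mutually consistent.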
4. A Novel Semi-Supervised Multi-View Picture Fuzzy Clustering Approach for Enhanced Satellite Image Segmentation
Authors: Pham Huy Thong, Hoang Thi Canh, Nguyen Tuan Huy, Nguyen Long Giang, Luong Thi Hong Lan. Computers, Materials & Continua, 2026, No. 3, pp. 1092-1117 (26 pages).
Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data, and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its outstanding robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
Keywords: multi-view clustering; satellite image segmentation; semi-supervised learning; picture fuzzy sets; remote sensing
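SS-MPFC builds on fuzzy membership functions. As background only, the standard fuzzy c-means membership update (not the paper's picture-fuzzy, multi-view, constrained objective) can be sketched as:

```python
import numpy as np

def fcm_memberships(X: np.ndarray, centers: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Fuzzy c-means membership update: u_ik proportional to d_ik^(-2/(m-1))."""
    # Pairwise distances from each sample to each cluster centre: shape (n, c).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at a centre
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)  # rows sum to 1

# Toy data: two tight groups of 2-D "pixels" and their cluster centres.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = np.array([[0.05, 0.0], [5.05, 5.0]])
U = fcm_memberships(X, centers)
```

Picture fuzzy sets extend this soft membership with degrees of neutrality and refusal, and SS-MPFC further weights memberships across views and penalises violated must-link/cannot-link constraints.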
5. C-SegNet: a practical approach for automated diabetic macular edema segmentation in optical coherence tomography images
Authors: Zhi-Yuan Guan, Ge Deng, Shi-Long Shi, Zhen Tang, Xian-Kun Dong, Qiu-Yi Li, Shu-Jing Shen, Yong-Ling He, Xue-Jun Qiu. Biomedical Engineering Communications, 2026, No. 2, pp. 15-22 (8 pages).
Background: Diabetic macular edema is a prevalent retinal condition and a leading cause of visual impairment among diabetic patients. Early detection of affected areas is beneficial for effective diagnosis and treatment. Traditionally, diagnosis relies on optical coherence tomography imaging interpreted by ophthalmologists. However, this manual image interpretation is often slow and subjective. Therefore, developing automated segmentation for macular edema images is essential to improve diagnostic efficiency and accuracy. Methods: To improve clinical diagnostic efficiency and accuracy, we proposed a SegNet network structure integrated with a convolutional block attention module (CBAM). This network introduces a multi-scale input module, the CBAM attention mechanism, and skip connections. The multi-scale input module enhances the network's perceptual capabilities, while the lightweight CBAM effectively fuses relevant features across channel and spatial dimensions, allowing for better learning of varying information levels. Results: Experimental results demonstrate that the proposed network achieves an IoU of 80.127% and an accuracy of 99.162%. Compared to traditional segmentation networks, this model has fewer parameters, faster training and testing speed, and superior performance on semantic segmentation tasks, indicating its high practical applicability. Conclusion: The C-SegNet proposed in this study enables accurate segmentation of diabetic macular edema lesion images, facilitating quicker diagnosis for healthcare professionals.
Keywords: multi-scale input; diabetic macular edema; image segmentation; optical coherence tomography
6. BiCLIP-nnFormer: A Virtual Multimodal Instrument for Efficient and Accurate Medical Image Segmentation (Cited by: 2)
Authors: Wang Bo, Yue Yan, Mengyuan Xu, Yuqun Yang, Xu Tang, Kechen Shu, Jingyang Ai, Zheng You. Instrumentation, 2025, No. 2, pp. 1-13 (13 pages).
Image segmentation is attracting increasing attention in the field of medical image analysis. Given its widespread utilization across various medical applications, ensuring and improving segmentation accuracy has become a crucial topic of research. With advances in deep learning, researchers have developed numerous methods that combine Transformers and convolutional neural networks (CNNs) to create highly accurate models for medical image segmentation. However, efforts to further enhance accuracy by developing larger and more complex models, or by training with more extensive datasets, significantly increase computational resource consumption. To address this problem, we propose BiCLIP-nnFormer (the prefix "Bi" refers to the use of two distinct CLIP models), a virtual multimodal instrument that leverages CLIP models to enhance the segmentation performance of the medical segmentation model nnFormer. Since the two CLIP models (PMC-CLIP and CoCa-CLIP) are pre-trained on large datasets, they do not require additional training, thus conserving computational resources. These models are used offline to extract image and text embeddings from medical images. The embeddings are then processed by the proposed 3D CLIP adapter, which adapts the CLIP knowledge for segmentation tasks by fine-tuning. Finally, the adapted embeddings are fused with feature maps extracted from the nnFormer encoder to generate predicted masks. This process enriches the representation capabilities of the feature maps by integrating global multimodal information, leading to more precise segmentation predictions. We demonstrate the superiority of BiCLIP-nnFormer and the effectiveness of using CLIP models to enhance nnFormer through experiments on two public datasets, namely the Synapse multi-organ segmentation dataset (Synapse) and the Automatic Cardiac Diagnosis Challenge dataset (ACDC), as well as a self-annotated lung multi-category segmentation dataset (LMCS).
Keywords: medical image analysis; image segmentation; CLIP; feature fusion; deep learning
7. Anomaly monitoring and early warning of electric moped charging device with infrared image (Cited by: 1)
Authors: LI Jiamin, HAN Bo, JIANG Mingshun. Optoelectronics Letters, 2025, No. 3, pp. 136-141 (6 pages).
Potential high-temperature risks exist in heat-prone components of electric moped charging devices, such as sockets, interfaces, and controllers. Traditional detection methods have limitations in terms of real-time performance and monitoring scope. To address this, a temperature detection method based on infrared image processing has been proposed: the median filtering algorithm is used to denoise the original infrared image, and an image segmentation algorithm is then applied to divide the image.
Keywords: anomaly monitoring; temperature detection; median filtering algorithm; infrared image processing; image segmentation algorithm; electric moped charging device
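The two-stage pipeline sketched in the abstract (median-filter denoising, then threshold-based segmentation of hot regions) can be illustrated on a synthetic infrared frame. The filter size, threshold value, and data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter via shifted stacks (edges handled by replicate padding)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

# Synthetic infrared frame: cool background, one hot component, one impulse-noise pixel.
frame = np.full((32, 32), 30.0)
frame[10:14, 10:14] = 90.0   # hot socket/interface region
frame[0, 0] = 255.0          # salt noise, typical sensor artifact
smooth = median_filter3(frame)
hot = smooth > 60.0          # simple threshold segmentation of hot areas
```

The median filter suppresses the isolated impulse without blurring the hot region away, so the threshold step flags only the genuinely hot component.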
8. Rendered image denoising method with filtering guided by lighting information (Cited by: 1)
Authors: MA Minghui, HU Xiaojuan, ZHANG Ripei, CHEN Chunyi, YU Haiyang. Optoelectronics Letters, 2025, No. 4, pp. 242-248 (7 pages).
The visual noise of each light intensity area is different when an image is rendered by the Monte Carlo method. However, existing denoising algorithms have limited performance under complex lighting conditions and easily lose detailed information. We therefore propose a rendered-image denoising method with filtering guided by lighting information. First, we design an image segmentation algorithm based on lighting information to segment the image into different illumination areas. Then, we establish a parameter prediction model guided by lighting information for filtering (PGLF) to predict the filtering parameters of different illumination areas. For each illumination area, we use these parameters to construct area filters, which are guided by the lighting information to perform sub-area filtering. Finally, the filtering results are fused with auxiliary features to output denoised images, improving the overall denoising effect. On physically based rendering tool (PBRT) scenes and the Tungsten dataset, experimental results show that, compared with other guided-filtering denoising methods, our method improves the peak signal-to-noise ratio (PSNR) by 4.2164 dB on average and the structural similarity index (SSIM) by 7.8% on average. This shows that our method can better reduce noise in complex lighting scenes and improve image quality.
Keywords: rendered image denoising; Monte Carlo method; filtering guided by lighting information; denoising algorithms; image segmentation algorithm
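The PSNR metric used in the evaluation above is straightforward to compute. A minimal sketch with illustrative values, not the paper's experimental setup:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

ref = np.full((16, 16), 100.0)
noisy = ref + 10.0        # constant error of 10 grey levels -> MSE = 100
value = psnr(ref, noisy)
```

A reported gain of 4.2164 dB means the denoised output's mean squared error against the reference render is roughly 2.6 times lower than the competing methods', since every 10 dB corresponds to a tenfold MSE reduction.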
9. Pre-trained SAM as data augmentation for image segmentation (Cited by: 1)
Authors: Junjun Wu, Yunbo Rao, Shaoning Zeng, Bob Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 1, pp. 268-282 (15 pages).
Data augmentation plays an important role in training deep neural models by expanding the size and diversity of the dataset. Initially, data augmentation mainly involved simple transformations of images. Later, in order to increase the diversity and complexity of data, more advanced methods appeared and evolved into sophisticated generative models. However, these methods required massive computation for training or searching. In this paper, a novel training-free method that utilises the pre-trained Segment Anything Model (SAM) as a data augmentation tool (PTSAM-DA) is proposed to generate augmented annotations for images. Without the need for training, it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations. In this way, annotations are augmented more ingeniously than by simple manipulations, without incurring the huge computation of training a data augmentation model. Multiple comparative experiments are conducted on three datasets: an in-house dataset, ADE20K, and COCO2017. On the in-house dataset, namely the Agricultural Plot Segmentation Dataset, maximum improvements of 3.77% and 8.92% are gained in two mainstream metrics, mIoU and mAcc, respectively. Consequently, large vision models like SAM prove to be promising not only in image segmentation but also in data augmentation.
Keywords: data augmentation; image segmentation; large model; Segment Anything Model
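The first step of PTSAM-DA, deriving prompt boxes from existing annotations before feeding them to the pre-trained SAM, can be sketched as follows. The helper name and padding value are illustrative assumptions, and the SAM call itself is omitted:

```python
import numpy as np

def mask_to_prompt_box(mask: np.ndarray, pad: int = 2) -> tuple:
    """Tight (x0, y0, x1, y1) box around a binary annotation, slightly padded."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0),
            min(int(xs.max()) + pad, w - 1), min(int(ys.max()) + pad, h - 1))

# Toy annotation: one labeled region in a 20x20 mask.
mask = np.zeros((20, 20), bool)
mask[5:9, 6:12] = True
box = mask_to_prompt_box(mask)  # box to pass to a promptable segmenter
```

In the paper's pipeline such boxes would be handed to the frozen SAM predictor, whose refined output masks replace or supplement the original annotations at no training cost.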
10. M2ANet: Multi-branch and multi-scale attention network for medical image segmentation (Cited by: 1)
Authors: Wei Xue, Chuanghui Chen, Xuan Qi, Jian Qin, Zhen Tang, Yongsheng He. Chinese Physics B, 2025, No. 8, pp. 547-559 (13 pages).
Convolutional neural network (CNN)-based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, due to the inability to effectively capture global information from images, CNNs can easily lead to loss of contours and textures in segmentation results. The transformer model, in contrast, can effectively capture long-range dependencies in an image, and combining the CNN and the transformer can effectively extract local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. Specifically, in the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features to reduce information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and alleviate gradient vanishing. In the third component, we design a multi-scale feature fusion module, in which we adopt adaptive average pooling and position encoding to enhance contextual features; multi-head attention is then introduced to further enrich the feature representation. Finally, we validate the effectiveness and feasibility of the proposed M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly in the context of preserving contours and textures.
Keywords: medical image segmentation; convolutional neural network; multi-branch attention; multi-scale feature fusion
11. 3D medical image segmentation using the serial-parallel convolutional neural network and transformer based on cross-window self-attention (Cited by: 1)
Authors: Bin Yu, Quan Zhou, Li Yuan, Huageng Liang, Pavel Shcherbakov, Xuming Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 2, pp. 337-348 (12 pages).
Convolutional neural networks (CNNs) with the encoder-decoder structure are popular in medical image segmentation due to their excellent local feature extraction ability, but they face limitations in capturing global features. The transformer can extract global information well, but adapting it to small medical datasets is challenging and its computational complexity can be heavy. In this work, a serial and parallel network is proposed for accurate 3D medical image segmentation by combining CNN and transformer and promoting feature interactions across various semantic levels. The core components of the proposed method are the cross-window self-attention based transformer (CWST) and multi-scale local enhanced (MLE) modules. The CWST module enhances global context understanding by partitioning 3D images into non-overlapping windows and calculating sparse global attention between windows. The MLE module selectively fuses features by computing voxel attention between different branch features, and uses convolution to strengthen dense local information. Experiments on prostate, atrium, and pancreas MR/CT image datasets consistently demonstrate the advantage of the proposed method over six popular segmentation models in both qualitative evaluation and quantitative indexes such as the Dice similarity coefficient, Intersection over Union, 95% Hausdorff distance, and average symmetric surface distance.
Keywords: convolutional neural network; cross-window self-attention; medical image segmentation; transformer
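The window partitioning that window-based self-attention relies on can be sketched for a 3D volume. This is an illustrative NumPy reshaping of a volume into non-overlapping cubes, not the paper's CWST implementation:

```python
import numpy as np

def partition_windows(vol: np.ndarray, win: int) -> np.ndarray:
    """Split a (D, H, W) volume into non-overlapping win^3 windows."""
    D, H, W = vol.shape
    assert D % win == 0 and H % win == 0 and W % win == 0
    # Factor each axis into (blocks, within-block), then group block axes first.
    v = vol.reshape(D // win, win, H // win, win, W // win, win)
    return v.transpose(0, 2, 4, 1, 3, 5).reshape(-1, win, win, win)

vol = np.arange(4 * 4 * 4).reshape(4, 4, 4)
wins = partition_windows(vol, 2)  # 8 windows, each of shape (2, 2, 2)
```

Attention computed within or between such windows scales with the window count rather than the full voxel count, which is what keeps the transformer branch tractable on 3D volumes.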
12. Stochastic Augmented-Based Dual-Teaching for Semi-Supervised Medical Image Segmentation
Authors: Hengyang Liu, Yang Yuan, Pengcheng Ren, Chengyun Song, Fen Luo. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 543-560 (18 pages).
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed images from labeled and unlabeled input loses a lot of labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. To be more precise, SADT trains the Student Network by using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation capabilities in low-contrast and local areas. In this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
Keywords: semi-supervised; medical image segmentation; contrastive learning; stochastic augmentation
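The bi-directional copy-paste mixing described above can be sketched with NumPy. This is a toy illustration with constant "images" and a fixed rectangular mask; SADT's progressive high-entropy filtering and the dual-teacher training loop are omitted:

```python
import numpy as np

# Stand-ins for a labeled and an unlabeled image of the same size.
labeled = np.full((8, 8), 1.0)
unlabeled = np.full((8, 8), 2.0)

# Rectangular copy region, applied in both directions.
mask = np.zeros((8, 8), bool)
mask[2:6, 2:6] = True

mix_in = np.where(mask, labeled, unlabeled)   # labeled patch pasted onto unlabeled
mix_out = np.where(mask, unlabeled, labeled)  # unlabeled patch pasted onto labeled
```

Mixing in both directions means every training image contains pixels from both distributions, which is what narrows the labeled-unlabeled distribution gap; the corresponding ground-truth and pseudo-label maps are mixed with the same mask.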
13. VSMI²-PANet: Versatile Scale-Malleable Image Integration and Patch-Wise Attention Network With Transformer for Lung Tumour Segmentation Using Multi-Modal Imaging Techniques
Authors: Nayef Alqahtani, Arfat Ahmad Khan, Rakesh Kumar Mahendran, Muhammad Faheem. CAAI Transactions on Intelligence Technology, 2025, No. 5, pp. 1376-1393 (18 pages).
Lung cancer (LC) is a major cancer which accounts for high mortality rates worldwide. Doctors utilise many imaging modalities to identify lung tumours and their severity at earlier stages. Nowadays, machine learning (ML) and deep learning (DL) methodologies are utilised for the robust detection and prediction of lung tumours. Recently, multi-modal imaging emerged as a robust technique for lung tumour detection by combining various imaging features. To cope with that, we propose a novel multi-modal imaging technique named versatile scale-malleable image integration and patch-wise attention network (VSMI²-PANet), which adopts three imaging modalities: computed tomography (CT), magnetic resonance imaging (MRI), and single photon emission computed tomography (SPECT). The designed model accepts input from CT and MRI images and passes it to the VSMI² module, which is composed of three sub-modules: an image cropping module, a scale-malleable convolution layer (SMCL), and a PANet module. CT and MRI images are subjected to the image cropping module in parallel to crop meaningful image patches and provide them to the SMCL module. The SMCL module is composed of adaptive convolutional layers that investigate those patches in parallel while preserving spatial information. The output from the SMCL is then fused and provided to the PANet module. The PANet module examines the fused patches by analysing the height, width, and channels of each image patch. As a result, it outputs high-resolution spatial attention maps indicating the location of suspicious tumours. These attention maps are then provided as input to the backbone module, which uses a light wave transformer (LWT) to segment the lung tumours into three classes: normal, benign, and malignant. In addition, the LWT also accepts the SPECT image as input for capturing variations precisely to segment the lung tumours. The performance of the proposed model is validated using several performance metrics, such as accuracy, precision, recall, F1-score, and the AUC curve, and the results show that the proposed work outperforms existing approaches.
Keywords: computational intelligence; computer vision; data fusion; deep learning; feature extraction; image segmentation
14. Deep Learning in Biomedical Image and Signal Processing: A Survey
Author: Batyrkhan Omarov. Computers, Materials & Continua, 2025, No. 11, pp. 2195-2253 (59 pages).
Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families, namely convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches, emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability techniques (Grad-CAM, SHAP, LIME) and describe emerging intrinsically interpretable designs that expose decision logic to end users. Regulatory guidance from the U.S. FDA, the European Medicines Agency, and the EU AI Act is summarised, linking transparency and lifecycle-monitoring requirements to concrete development practices. Remaining challenges, such as data imbalance, computational cost, privacy constraints, and cross-domain generalization, are discussed alongside promising solutions such as federated learning, uncertainty quantification, and lightweight 3D architectures. The article therefore offers researchers, clinicians, and policymakers a concise, practice-oriented roadmap for deploying trustworthy deep-learning systems in healthcare.
Keywords: deep learning; biomedical imaging; signal processing; neural networks; image segmentation; disease classification; drug discovery; patient monitoring; robotic surgery; artificial intelligence in healthcare
15. Novel image segmentation model of multi-view sheep face for identity recognition
Authors: Suhui Liu, Guangpu Wang, Chuanzhong Xuan, Zhaohui Tang, Junze Jia. International Journal of Agricultural and Biological Engineering, 2025, No. 6, pp. 260-268 (9 pages).
Traditional sheep identification is based on ear tags. However, ear tags not only cause stress to the animals but can also be lost, which affects correct recognition of sheep identity. In contrast, the acquisition of sheep face images offers the advantages of being non-invasive and stress-free for the animals. Nevertheless, extant convolutional neural network-based sheep face identification models are prone to insufficient refinement, which makes their implementation on farms challenging. To address this issue, this study presents a novel sheep face recognition model that employs advanced feature fusion techniques and precise image segmentation strategies. The images were preprocessed and accurately segmented using deep learning techniques, with a dataset constructed containing sheep face images from multiple viewpoints (left, front, and right faces). In particular, the model employs a segmentation algorithm to delineate the sheep face region accurately, utilizes the Improved Convolutional Block Attention Module (I-CBAM) to emphasize the salient features of the sheep face, and achieves multi-scale fusion of the features through a Feature Pyramid Network (FPN). This process guarantees that features captured from disparate viewpoints can be efficiently integrated to enhance recognition accuracy. Furthermore, the model guarantees precise delineation of sheep facial contours by streamlining the image segmentation procedure, thereby establishing a robust basis for precise identification of sheep identity. The findings demonstrate that the recognition accuracy of the Sheep Face Mask Region-based Convolutional Neural Network (SFMask R-CNN) model has been enhanced by 9.64% to 98.65% in comparison with the original model. The method offers a novel technological approach to the management of animal identity in sheep husbandry.
Keywords: image segmentation; sheep face; deep learning; multi-view; feature fusion
U-Net-Based Medical Image Segmentation: A Comprehensive Analysis and Performance Review
16
Authors: Aliyu Abdulfatah, Zhang Sheng, Yirga Eyasu Tenawerk — Journal of Electronic Research and Application, 2025, No. 1, pp. 202-208 (7 pages)
Medical image segmentation has become a cornerstone for many healthcare applications, allowing for the automated extraction of critical information from images such as Computed Tomography (CT) scans, Magnetic Resonance Imaging (MRI), and X-rays. The introduction of U-Net in 2015 has significantly advanced segmentation capabilities, especially for small datasets commonly found in medical imaging. Since then, various modifications to the original U-Net architecture have been proposed to enhance segmentation accuracy and tackle challenges like class imbalance, data scarcity, and multi-modal image processing. This paper provides a detailed review and comparison of several U-Net-based architectures, focusing on their effectiveness in medical image segmentation tasks. We evaluate performance metrics such as Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) across different U-Net variants, including HmsU-Net, CrossU-Net, mResU-Net, and others. Our results indicate that architectural enhancements such as transformers, attention mechanisms, and residual connections improve segmentation performance across diverse medical imaging applications, including tumor detection, organ segmentation, and lesion identification. The study also identifies current challenges in the field, including data variability, limited dataset sizes, and issues with class imbalance. Based on these findings, the paper suggests potential future directions for improving the robustness and clinical applicability of U-Net-based models in medical image segmentation.
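The DSC and IoU metrics this review compares have standard definitions on binary masks: DSC = 2|A∩B| / (|A|+|B|) and IoU = |A∩B| / |A∪B|. A minimal NumPy sketch:

```python
import numpy as np

def dice_iou(pred, gt, eps=1e-7):
    """Dice Similarity Coefficient and Intersection over Union
    for binary segmentation masks (standard definitions)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    return dsc, iou

pred = np.zeros((4, 4), int); pred[:2, :] = 1   # 8 predicted pixels
gt   = np.zeros((4, 4), int); gt[:3, :] = 1     # 12 ground-truth pixels, overlap 8
dsc, iou = dice_iou(pred, gt)
print(round(dsc, 3), round(iou, 3))  # 0.8 0.667
```

Note that DSC is always at least as large as IoU on the same masks (DSC = 2·IoU / (1+IoU)), which is worth remembering when comparing numbers across papers that report only one of the two.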
Keywords: U-Net architecture, medical image segmentation, DSC, IoU, transformer-based segmentation
Mapping discrete forest age classes of Mediterranean pinelands since the pre-satellite era using historical orthoimage mosaics and machine learning
17
Authors: Vicent A. Ribas-Costa, Andrew Trlica, Aitor Gastón — Journal of Forestry Research, 2025, No. 6, pp. 187-207 (21 pages)
Land use/land cover (LULC) change monitoring is critical for understanding environmental and socioeconomic processes and for identifying patterns that may affect current and future land management. Forest cover evolution in the Mediterranean region has been studied to better understand forest succession, wildfire potential, and carbon stock assessment for climate change mitigation, among other reasons. However, though multiple sources of current LULC exist, data on last century's forest cover are less common and normally still reliant on locally orthophoto-interpreted data, making continuous maps of historical forest cover relatively uncommon. In this work, a pipeline based on image segmentation and random forest LULC modeling was developed to process three high-resolution orthophotos (1956, 1989, and 2021) into continuous LULC maps of Spain's island of Ibiza. Next, they were combined to quantify the forest evolution of Mediterranean Aleppo pine (Pinus halepensis Mill.) and to generate a continuous map of forest age classes. Our models were able to differentiate forestland with an accuracy higher than 80% in all cases and to approximate forestland cover change since the mid-twentieth century, estimating 21,165±252 ha (37.0±0.4%) in 1956, 27,099±472 ha (46.8±0.8%) in 1989, and 30,195±302 ha (52.8±0.5%) in 2021, with a mean increase of 139±6 ha (0.46±0.02%, calculated from the current forest cover estimate) per year. The most important variables for the identification of forestland were the terrain slope and the image gray level or color information in all orthophotos. When combining the information from the three periods, the analysis of forest evolution revealed that a significant portion of the current forest cover, approximately 15,776 ha, fell within the 75-120-year age range, while 5,388 ha fell within the range of 42-74 years, and 9,022 ha within the 10-41-year forest age class. Younger forests, except when mapped after known wildfires, were not considered due to the limitations of the methodology. When compared with forest age data based on ground measurements, significant differences were found among each of the remotely sensed forest age classes, with a mean difference of 13 years between the theoretical age class central value and the real observed plot average age. Overall, 63% of the forest inventory plots were assigned the correct forest age class. This work will allow a better understanding of long-term Mediterranean forest dynamics and will help landowners and policymakers respond to new landscape planning challenges and achieve sustainable development goals.
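The per-year change rate reported above follows directly from the endpoint estimates: the 1956 and 2021 cover figures, 65 years apart, reproduce both the 139 ha/year increase and the 0.46% rate (expressed relative to the current cover estimate):

```python
# Annual forest cover change on Ibiza, recomputed from the
# point estimates reported in the abstract (uncertainty ignored)
area_1956, area_2021 = 21_165, 30_195        # hectares
years = 2021 - 1956                          # 65 years
rate = (area_2021 - area_1956) / years       # ha per year
pct = rate / area_2021 * 100                 # % of current cover per year
print(round(rate), round(pct, 2))  # 139 0.46
```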
Keywords: aerial orthophotos, image segmentation, random forest, landscape evolution, forest age
Transformers for Multi-Modal Image Analysis in Healthcare
18
Authors: Sameera V Mohd Sagheer, Meghana K H, P M Ameer, Muneer Parayangat, Mohamed Abbas — Computers, Materials &amp; Continua, 2025, No. 9, pp. 4259-4297 (39 pages)
Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography, Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of the patient's health status. Each of these methods contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution, data collection methods, and noise levels. While traditional models like Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle to handle multi-modal complexities, lacking the capacity to model global relationships. This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across various modalities. Additionally, it shows resilience to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles linked to transformer models, particularly in real-time clinical applications in resource-constrained environments, several optimization techniques have been integrated to boost scalability and efficiency. Initially, a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness. Methods such as model pruning, quantization, and knowledge distillation have been applied to reduce the parameter count and enhance inference speed. Furthermore, efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of traditional self-attention operations. For further deployment optimization, researchers have implemented hardware-aware acceleration strategies, including the use of TensorRT and ONNX-based model compression, to ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, ensuring viability even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and enhancing computational efficiency for implementation in resource-limited environments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
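The cross-attention mechanism the framework uses to integrate modalities is, at its core, scaled dot-product attention where queries come from one modality and keys/values from another. A minimal NumPy sketch (token counts, dimensions, and the random projection weights are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def cross_attention(q_feat, kv_feat, d=16, seed=0):
    """Scaled dot-product cross-attention between two modalities
    (e.g., MRI tokens attending to PET tokens). Projection weights
    are random stand-ins for learned parameters."""
    rng = np.random.default_rng(seed)
    wq = rng.standard_normal((q_feat.shape[-1], d)) * 0.1
    wk = rng.standard_normal((kv_feat.shape[-1], d)) * 0.1
    wv = rng.standard_normal((kv_feat.shape[-1], d)) * 0.1
    q, k, v = q_feat @ wq, kv_feat @ wk, kv_feat @ wv
    scores = q @ k.T / np.sqrt(d)                   # (n_q, n_kv)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)        # row-wise softmax
    return attn @ v                                 # (n_q, d)

mri = np.random.default_rng(1).standard_normal((32, 64))  # 32 MRI tokens
pet = np.random.default_rng(2).standard_normal((48, 64))  # 48 PET tokens
fused = cross_attention(mri, pet)
print(fused.shape)  # (32, 16)
```

The full attention matrix here is quadratic in token count, which is exactly the cost the linear and sparse attention variants mentioned in the abstract aim to reduce.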
Keywords: multi-modal image analysis, medical imaging, deep learning, image segmentation, disease detection, multi-modal fusion, Vision Transformers (ViTs), precision medicine, clinical decision support
Automatic diagnosis of agromyzid leafminer damage levels using leaf images captured by AR glasses
19
Authors: Zhongru Ye, Yongjian Liu, Fuyu Ye, Hang Li, Ju Luo, Jianyang Guo, Zelin Feng, Chen Hong, Lingyi Li, Shuhua Liu, Baojun Yang, Wanxue Liu, Qing Yao — Journal of Integrative Agriculture, 2025, No. 9, pp. 3559-3573 (15 pages)
Agromyzid leafminers cause significant economic losses in both vegetable and horticultural crops, and precise assessments of pesticide needs must be based on the extent of leaf damage. Traditionally, surveyors estimate the damage by visually comparing the proportion of damaged to intact leaf area, a method that lacks objectivity, precision, and reliable data traceability. To address these issues, an advanced survey system that combines augmented reality (AR) glasses with a camera and an artificial intelligence (AI) algorithm was developed in this study to objectively and accurately assess leafminer damage in the field. By wearing AR glasses equipped with a voice-controlled camera, surveyors can easily flatten damaged leaves by hand and capture images for analysis. This method can provide a precise and reliable diagnosis of leafminer damage levels, which in turn supports the implementation of scientifically grounded and targeted pest management strategies. To calculate the leafminer damage level, the DeepLab-Leafminer model was proposed to precisely segment the leafminer-damaged regions and the intact leaf region. The integration of an edge-aware module and a Canny loss function into the DeepLabv3+ model enhanced the DeepLab-Leafminer model's capability to accurately segment the edges of leafminer-damaged regions, which often exhibit irregular shapes. Compared with state-of-the-art segmentation models, the DeepLab-Leafminer model achieved superior segmentation performance with an Intersection over Union (IoU) of 81.23% and an F1 score of 87.92% on leafminer-damaged leaves. The test results revealed a 92.38% diagnosis accuracy of leafminer damage levels based on the DeepLab-Leafminer model. A mobile application and a web platform were developed to assist surveyors in displaying the diagnostic results of leafminer damage levels. This system provides surveyors with an advanced, user-friendly, and accurate tool for assessing agromyzid leafminer damage in agricultural fields using wearable AR glasses and an AI model. This method can also be utilized to automatically diagnose pest and disease damage levels in other crops based on leaf images.
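Once the damaged and intact regions are segmented, the damage level reduces to an area ratio bucketed into grades. A minimal sketch of that final step (the grading thresholds here are illustrative; the paper's actual grading scale is not given in the abstract):

```python
import numpy as np

def damage_level(damaged_mask, intact_mask, bins=(0.05, 0.25, 0.50)):
    """Damage ratio = damaged leaf area / total leaf area, bucketed into
    discrete levels. Thresholds are hypothetical placeholders."""
    damaged = damaged_mask.sum()
    total = damaged + intact_mask.sum()
    ratio = damaged / total
    level = sum(ratio > t for t in bins)  # 0 .. len(bins)
    return ratio, level

dmg = np.zeros((10, 10), int); dmg[:3, :] = 1    # 30 damaged pixels
ok  = np.zeros((10, 10), int); ok[3:, :] = 1     # 70 intact pixels
ratio, level = damage_level(dmg, ok)
print(ratio, level)  # 0.3 2
```

Working in pixel areas of the segmented masks, rather than visual estimates, is what gives the system its traceability: the masks and ratios can be stored and re-audited.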
Keywords: agromyzid leafminer, plant leaf image, damage level, AR glasses, DeepLabv3+ model, image segmentation
An EnFCM remote sensing image forest land extraction method based on PCA multi-feature fusion
20
Authors: ZHU Shengyang, WANG Xiaopeng, WEI Tongyi, FAN Weiwei, SONG Yubo — Journal of Measurement Science and Instrumentation, 2025, No. 2, pp. 216-223 (8 pages)
The traditional EnFCM (Enhanced Fuzzy C-Means) algorithm considers only grey-scale features in image segmentation, resulting in less than satisfactory results when the algorithm is used for remote sensing woodland image segmentation and extraction. An EnFCM remote sensing forest land extraction method based on PCA multi-feature fusion was proposed. Firstly, histogram equalization was applied to improve the image contrast. Secondly, the texture and edge features of the image were extracted, and a multi-feature fused pixel image was generated using the PCA technique. Moreover, the fused feature was used as a feature constraint to measure the difference between pixels instead of a single grey-scale feature. Finally, an improved feature distance metric calculated the similarity between the pixel points and the cluster centers to complete the cluster segmentation. The experimental results showed that the error was between 1.5% and 4.0% compared with the forested area counted by experts' hand-drawing, demonstrating high-accuracy segmentation and extraction.
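The PCA fusion step described above projects per-pixel feature maps (grey level, texture, edge) onto a common axis so a single fused image can drive the clustering. A minimal NumPy sketch of that idea, using the first principal component (the feature maps here are random placeholders; the paper's actual texture/edge extractors are not specified in the abstract):

```python
import numpy as np

def pca_fuse(features):
    """Fuse per-pixel feature maps (e.g., grey level, texture, edge)
    into one image by projecting onto the first principal component."""
    h, w = features[0].shape
    X = np.stack([f.ravel() for f in features], axis=1)  # (H*W, n_feat)
    X = X - X.mean(axis=0)                               # center features
    cov = X.T @ X / (X.shape[0] - 1)                     # (n_feat, n_feat)
    vals, vecs = np.linalg.eigh(cov)                     # symmetric eig
    pc1 = vecs[:, np.argmax(vals)]                       # top component
    return (X @ pc1).reshape(h, w)

rng = np.random.default_rng(0)
gray, texture, edge = (rng.standard_normal((32, 32)) for _ in range(3))
fused = pca_fuse([gray, texture, edge])
print(fused.shape)  # (32, 32)
```

The fused image then replaces the single grey-scale channel inside the EnFCM distance computation, so pixel similarity reflects all three cues at once.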
Keywords: image segmentation, forest land extraction, PCA transform, multi-feature fusion, EnFCM algorithm