Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring, clinical diagnosis, and robotic applications. Nevertheless, it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility, adhesion, self-healing, and environmental robustness with excellent sensing metrics. Herein, we report a multifunctional, anti-freezing, self-adhesive, and self-healable organogel pressure sensor composed of cobalt nanoparticle encapsulated nitrogen-doped carbon nanotubes (CoN CNT) embedded in a polyvinyl alcohol-gelatin (PVA/GLE) matrix. Fabricated using a binary solvent system of water and ethylene glycol (EG), the CoN CNT/PVA/GLE organogel exhibits excellent flexibility, biocompatibility, and temperature tolerance with remarkable environmental stability. Electrochemical impedance spectroscopy confirms near-stable performance across a broad humidity range (40%-95% RH). Freeze-tolerant conductivity under sub-zero conditions (-20 ℃) is attributed to the synergistic role of CoN CNT and EG, preserving mobility and network integrity. The CoN CNT/PVA/GLE organogel sensor exhibits a high sensitivity of 5.75 kPa^(-1) in the detection range from 0 to 20 kPa, ideal for subtle biomechanical motion detection. A smart human-machine interface for English letter recognition using deep learning achieved 98% accuracy. The organogel sensor's utility was extended to detect human gestures such as finger bending, wrist motion, and throat vibration during speech.
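To make the reported sensitivity figure concrete, the minimal sketch below converts a measured relative signal change into an estimated pressure, assuming the common definition of sensitivity S = (ΔI/I0)/P and a linear response over the 0-20 kPa range; the function name and sample value are illustrative, not from the paper.

```python
# Minimal sketch: estimate applied pressure from a relative current change,
# assuming a linear response with sensitivity S = 5.75 kPa^-1 over 0-20 kPa.
SENSITIVITY_PER_KPA = 5.75  # reported sensitivity, kPa^-1

def pressure_from_signal(delta_i_over_i0: float) -> float:
    """Estimate pressure (kPa) from the relative current change ΔI/I0."""
    pressure_kpa = delta_i_over_i0 / SENSITIVITY_PER_KPA
    # Clamp to the characterized detection range of the sensor.
    return max(0.0, min(pressure_kpa, 20.0))

# Example: a relative current change of 11.5 maps to ~2 kPa (illustrative values).
print(pressure_from_signal(11.5))
```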
Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, prone to errors, and lacks consistency, emphasizing the need for a reliable and automated inspection system. Leveraging both object detection and image segmentation approaches, this research proposes a vision-based solution for the detection of various kinds of tools in the toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing them through preprocessing and augmentation, a dataset consisting of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥ 30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's increased latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (Recall, Accuracy, F1-score, and Precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. In addition to detection and segmentation, the application is capable of precisely measuring socket dimensions by utilising edge detection techniques on YOLOv11 segmentation masks. This enables specification-level quality control directly on the assembly line, improving real-time inspection capability. The implementation is a significant step forward for intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification.
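As one way to turn a segmentation mask into a dimensional measurement (the abstract does not give the exact procedure), a minimal OpenCV sketch is shown below; the pixel-to-millimetre calibration constant and the synthetic mask are illustrative assumptions.

```python
import cv2
import numpy as np

# Minimal sketch: estimate an object's width/height from a binary segmentation
# mask by fitting a rotated bounding box to its largest contour.
# MM_PER_PIXEL is an illustrative calibration constant, not a value from the paper.
MM_PER_PIXEL = 0.25

def measure_mask(mask: np.ndarray) -> tuple[float, float]:
    """Return (width_mm, height_mm) of the largest object in a 0/255 mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (_, _), (w_px, h_px), _ = cv2.minAreaRect(largest)  # rotated bounding box
    return w_px * MM_PER_PIXEL, h_px * MM_PER_PIXEL

# Example with a synthetic 80 x 40 pixel rectangle.
demo = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(demo, (50, 60), (130, 100), 255, thickness=-1)
print(measure_mask(demo))  # roughly 20 mm by 10 mm (order may vary)
```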
Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems. However, the inspection of underwater pipelines presents a challenge due to factors such as light scattering, absorption, restricted visibility, and ambient noise. The advancement of deep learning has introduced powerful techniques for processing large amounts of unstructured and imperfect data collected from underwater environments. This study evaluated the efficacy of the You Only Look Once (YOLO) algorithm, a real-time object detection and localization model based on convolutional neural networks, in identifying and classifying various types of pipeline defects in underwater settings. YOLOv8, the latest evolution in the YOLO family, integrates advanced capabilities, such as anchor-free detection, a cross-stage partial network backbone for efficient feature extraction, and a feature pyramid network + path aggregation network neck for robust multi-scale object detection, which make it particularly well-suited for complex underwater environments. Due to the lack of suitable open-access datasets for underwater pipeline defects, a custom dataset was captured using a remotely operated vehicle in a controlled environment. Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed other models in terms of pipe defect detection and classification and achieved a strong balance between precision and recall in identifying pipeline cracks, rust, corners, defective welds, flanges, tapes, and holes. This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
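For readers unfamiliar with the workflow, a minimal training-and-inference sketch using the Ultralytics YOLOv8 API follows; the dataset YAML path, epoch count, image size, and confidence threshold are placeholder assumptions, not settings reported by the study.

```python
from ultralytics import YOLO

# Minimal sketch of a YOLOv8 X-Large fine-tuning and inference workflow.
# "pipeline_defects.yaml" and the hyperparameters are illustrative placeholders.
model = YOLO("yolov8x.pt")  # pretrained extra-large detection weights

# Fine-tune on a custom defect dataset described by a standard YOLO data YAML.
model.train(data="pipeline_defects.yaml", epochs=100, imgsz=640)

# Run inference on new ROV footage and keep boxes above a confidence threshold.
results = model.predict(source="rov_frames/", conf=0.25)
for r in results:
    print(r.boxes.cls.tolist(), r.boxes.conf.tolist())  # class ids and scores
```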
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing numerous omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across all omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions, including combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
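To illustrate how an autoencoder compresses a high-dimensional omics profile into a low-dimensional representation usable for downstream analysis, a minimal PyTorch sketch follows; the feature dimension and layer sizes are illustrative assumptions, not an architecture from the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch: an autoencoder that compresses a high-dimensional omics
# profile (e.g., a gene-expression vector) into a small latent representation.
# The 2000-feature input size and layer widths are illustrative assumptions.
class OmicsAutoencoder(nn.Module):
    def __init__(self, n_features: int = 2000, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent embedding used for downstream tasks
        return self.decoder(z), z    # reconstruction and embedding

model = OmicsAutoencoder()
x = torch.randn(8, 2000)                 # a batch of 8 synthetic omics profiles
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective
print(z.shape, loss.item())
```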
This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
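For reference, the Dice score used throughout such benchmarking can be computed from binary segmentation masks as in the short sketch below (a generic definition, not code from any reviewed study).

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    Dice = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: 3 overlapping pixels out of 4 in each mask gives Dice = 6/8 = 0.75.
a = np.array([[1, 1, 1, 1, 0]])
b = np.array([[0, 1, 1, 1, 1]])
print(round(dice_score(a, b), 2))  # 0.75
```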
Deep learning-based methods have become alternatives to traditional numerical weather prediction systems, offering faster computation and the ability to utilize large historical datasets. However, the application of deep learning to medium-range regional weather forecasting with limited data remains a significant challenge. In this work, three key solutions are proposed: (1) motivated by the need to improve model performance in data-scarce regional forecasting scenarios, the authors innovatively apply semantic segmentation models to better capture spatiotemporal features and improve prediction accuracy; (2) recognizing the challenge of overfitting and the inability of traditional noise-based data augmentation methods to effectively enhance model robustness, a novel learnable Gaussian noise mechanism is introduced that allows the model to adaptively optimize perturbations for different locations, ensuring more effective learning; and (3) to address the issue of error accumulation in autoregressive prediction, as well as the challenge of learning difficulty and the lack of intermediate data utilization in one-shot prediction, the authors propose a cascade prediction approach that effectively resolves these problems while significantly improving model forecasting performance. The method achieves a competitive result in the East China Regional AI Medium Range Weather Forecasting Competition. Ablation experiments further validate the effectiveness of each component, highlighting their contributions to enhancing prediction performance.
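One plausible reading of the learnable Gaussian noise mechanism is a per-location noise scale trained jointly with the network, as in the PyTorch sketch below; this is an interpretation for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Sketch of a learnable Gaussian noise layer: each spatial location gets its own
# trainable noise scale, so the model can adapt perturbation strength per location.
# This is an illustrative interpretation, not the authors' implementation.
class LearnableGaussianNoise(nn.Module):
    def __init__(self, height: int, width: int):
        super().__init__()
        # log_sigma keeps the learned standard deviation positive after exp().
        self.log_sigma = nn.Parameter(torch.full((1, 1, height, width), -2.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:          # perturb only during training
            return x
        sigma = torch.exp(self.log_sigma)
        return x + sigma * torch.randn_like(x)

layer = LearnableGaussianNoise(height=64, width=64)
fields = torch.randn(4, 8, 64, 64)     # batch of 8-channel weather fields
print(layer(fields).shape)             # torch.Size([4, 8, 64, 64])
```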
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the Personal Key Indicators of Heart Disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
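As a rough picture of what a residual transformation over tabular clinical features looks like (the paper's exact DeepResNet architecture is not given in the abstract), a minimal PyTorch sketch follows; the feature count, widths, and depth are illustrative.

```python
import torch
import torch.nn as nn

# Minimal sketch of a residual block for tabular clinical features: the block
# learns a non-linear transformation and adds it back to its input via a skip
# connection. Widths and depth are illustrative, not the paper's DeepResNet.
class TabularResidualBlock(nn.Module):
    def __init__(self, dim: int, hidden: int = 64, dropout: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Dropout(dropout), nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))  # skip connection, then activation

# Stack a few blocks and a classification head over 17 encoded indicators
# (the feature count is an assumption for illustration).
model = nn.Sequential(
    TabularResidualBlock(17), TabularResidualBlock(17), nn.Linear(17, 2)
)
logits = model(torch.randn(32, 17))  # batch of 32 synthetic patient records
print(logits.shape)                  # torch.Size([32, 2])
```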
An image processing and deep learning method for identifying different types of rock images was proposed. Preprocessing, such as rock image acquisition, gray scaling, Gaussian blurring, and feature dimensionality reduction, was conducted to extract useful feature information, and rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) and PyQt5. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The categorization approach was evaluated using image data from the validation and test datasets, and key metrics, such as accuracy, precision, and recall, were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images has an accuracy rate of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to more efficiently and accurately recognize rock samples. Moreover, it can match them with the intelligent support design system to effectively improve the reliability and economy of the support scheme. The system can serve as a reference for supporting the design of other mining and underground space projects.
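To ground the described pipeline, a minimal sketch of grayscale-plus-Gaussian-blur preprocessing feeding a small TensorFlow/Keras CNN follows; the image size, filter counts, and the six-class output are illustrative assumptions, not the paper's network.

```python
import cv2
import tensorflow as tf

# Minimal sketch of the described pipeline: grayscale + Gaussian blur
# preprocessing, then a small TensorFlow/Keras CNN classifier.
# Image size, filter counts, and the 6-class output are illustrative assumptions.
def preprocess(path: str, size: int = 128):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)     # gray scaling
    img = cv2.GaussianBlur(img, (5, 5), 0)           # Gaussian blurring
    img = cv2.resize(img, (size, size)) / 255.0      # normalize to [0, 1]
    return img.reshape(size, size, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),   # one output per rock type
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```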
With the growing advancement of wireless communication technologies, WiFi-based human sensing has gained increasing attention as a non-intrusive and device-free solution. Among the available signal types, Channel State Information (CSI) offers fine-grained temporal, frequency, and spatial insights into multipath propagation, making it a crucial data source for human-centric sensing. Recently, the integration of deep learning has significantly improved the robustness and automation of feature extraction from CSI in complex environments. This paper provides a comprehensive review of deep learning-enhanced human sensing based on CSI. We first outline mainstream CSI acquisition tools and their hardware specifications, then provide a detailed discussion of preprocessing methods such as denoising, time–frequency transformation, data segmentation, and augmentation. Subsequently, we categorize deep learning approaches according to sensing tasks—namely detection, localization, and recognition—and highlight representative models across application scenarios. Finally, we examine key challenges, including domain generalization, multi-user interference, and limited data availability, and we propose future research directions involving lightweight model deployment, multimodal data fusion, and semantic-level sensing.
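As a concrete example of the preprocessing steps discussed (denoising followed by time-frequency transformation), a small sketch over a synthetic CSI amplitude stream is shown below; the sampling rate and filter settings are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

# Minimal sketch of CSI preprocessing: low-pass denoising of a CSI amplitude
# stream, then a short-time Fourier transform to obtain a time-frequency map.
# The 1 kHz packet rate and 60 Hz cutoff are illustrative assumptions.
fs = 1000.0                                   # CSI packet rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
csi_amp = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)  # synthetic

# Low-pass Butterworth filter removes noise above the body-motion frequency band.
b, a = butter(N=4, Wn=60.0 / (fs / 2), btype="low")
denoised = filtfilt(b, a, csi_amp)

# Time-frequency transformation: spectrogram segments are fed to a deep model.
freqs, times, spectrum = stft(denoised, fs=fs, nperseg=256)
print(spectrum.shape)  # (frequency bins, time frames)
```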
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer's diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis for every AD stage. The confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer's diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
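To ground two of the named components, the sketch below applies CLAHE to a synthetic grayscale slice with OpenCV and computes the MCC with scikit-learn; the clip limit, tile size, and label vectors are illustrative, not the paper's settings or results.

```python
import cv2
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Minimal sketch of two named components: CLAHE contrast enhancement on a
# grayscale slice, and MCC-based evaluation of predicted class labels.
# The clip limit and tile grid size are illustrative, not the paper's settings.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
slice_img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # synthetic slice
enhanced = clahe.apply(slice_img)
print(enhanced.shape, enhanced.dtype)

# MCC rewards balanced performance across classes, which matters under imbalance.
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3]
print(round(matthews_corrcoef(y_true, y_pred), 3))
```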
Deep learning algorithms have been rapidly incorporated into many different applications due to the increase in computational power and the availability of massive amounts of data. Recently, both deep learning and ensemble learning have been used to recognize underlying structures and patterns from high-level features to make predictions/decisions. With the growth in popularity of deep learning and ensemble learning algorithms, they have received significant attention from both scientists and the industrial community due to their superior ability to learn features from big data. Ensemble deep learning has exhibited significant performance in enhancing learning generalization through the use of multiple deep learning algorithms. Although ensemble deep learning has large quantities of training parameters, which results in time and space overheads, it performs much better than traditional ensemble learning. Ensemble deep learning has been successfully used in several areas, such as bioinformatics, finance, and health care. In this paper, we review and investigate recent ensemble deep learning algorithms and techniques in health care domains, medical imaging, health care data analytics, genomics, diagnosis, disease prevention, and drug discovery. We cover several widely used deep learning algorithms along with their architectures, including deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs). Common healthcare tasks, such as medical imaging, electronic health records, and genomics, are also demonstrated. Furthermore, in this review, the challenges inherent in reducing the burden on the healthcare system are discussed and explored. Finally, future directions and opportunities for enhancing healthcare model performance are discussed.
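To make the basic idea of ensemble deep learning concrete, a minimal soft-voting sketch that averages the class probabilities of several independently trained networks is shown below (a generic pattern, not a method from a specific reviewed paper).

```python
import numpy as np

# Minimal sketch of soft-voting ensembling: average the predicted class
# probabilities of several independently trained networks, then take the argmax.
# The three probability arrays below stand in for real model outputs.
def soft_vote(prob_list: list[np.ndarray]) -> np.ndarray:
    """Return ensemble class predictions from per-model probability arrays."""
    mean_probs = np.mean(np.stack(prob_list), axis=0)
    return mean_probs.argmax(axis=1)

# Illustrative outputs of three models for two samples and three classes.
model_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.5, 0.4]])
model_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]])
model_c = np.array([[0.5, 0.4, 0.1], [0.1, 0.2, 0.7]])
print(soft_vote([model_a, model_b, model_c]))  # [0, 2]
```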
The precise identification of quartz minerals is crucial in mineralogy and geology due to their widespread occurrence and industrial significance. Traditional methods of quartz identification in thin sections are labor-intensive and require significant expertise, often complicated by the coexistence of other minerals. This study presents a novel approach leveraging deep learning techniques combined with hyperspectral imaging to automate the identification of quartz minerals. The four advanced deep learning models utilized—PSPNet, U-Net, FPN, and LinkNet—offer significant advancements in efficiency and accuracy. Among these models, PSPNet exhibited superior performance, achieving the highest intersection over union (IoU) scores and demonstrating exceptional reliability in segmenting quartz minerals, even in complex scenarios. The study involved a comprehensive dataset of 120 thin sections, encompassing 2470 hyperspectral images prepared from 20 rock samples. Expert-reviewed masks were used for model training, ensuring robust segmentation results. This automated approach not only expedites the recognition process but also enhances reliability, providing a valuable tool for geologists and advancing the field of mineralogical analysis.
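The intersection over union (IoU) metric used to rank these models can be computed from binary masks as in the short sketch below (a generic definition, not the study's code).

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary masks:
    IoU = |A ∩ B| / |A ∪ B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Example: 1 overlapping pixel and 3 pixels in the union gives IoU = 1/3 ≈ 0.33.
a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 1, 0]])
print(round(iou_score(a, b), 2))  # 0.33
```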
Aiming at the problem that the bit error rate (BER) of the asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, a deep learning technique is applied to polarization code decoding in the ACO-OFDM space optical communication system. Moreover, this system realizes polarization code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with the traditional decoder. Simulations under different turbulence intensities as well as different mapping orders show that the convolutional neural network (CNN) decoder trained under weak-medium-strong turbulence atmospheric channels achieves a performance improvement of about two orders of magnitude (10^2) over the conventional decoder at 4-quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM are in between those of the conventional decoder.
Automated classification of retinal fundus images is essential for identifying eye diseases, though there is little earlier research on applying deep learning models designed especially for detecting tessellation in retinal fundus images. This study classifies four classes of retinal fundus images, three diseased and one normal, by creating a refined VGG16 model to categorize fundus pictures into tessellated, normal, myopia, and choroidal neovascularization groups. The approach utilizes a VGG16 architecture that has been altered with custom fully connected layers and dropout regularization, along with data augmentation techniques (rotation, flip, and rescale) on a dataset of 302 photos. Training involves class weighting and critical callbacks (early stopping, learning rate reduction, checkpointing) to maximize performance. Gains in accuracy (93.42% training, 77.5% validation) and improved class-specific F1 scores are attained. Grad-CAM-based Explainable AI (XAI) highlights the areas of the images that are important for each categorization, making the model interpretable for better understanding by medical experts. These results highlight the model's potential as a helpful diagnostic tool in ophthalmology, providing a clear and practical method for the early identification and categorization of retinal disorders, especially in cases such as tessellated fundus images.
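A minimal Keras sketch of the described setup (a VGG16 base, a custom fully connected head with dropout, and the named callbacks) is given below; layer sizes, the learning rate, and file paths are illustrative assumptions rather than the study's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Minimal sketch: frozen VGG16 base, custom fully connected head with dropout,
# and the named callbacks. Sizes, learning rate, and paths are illustrative.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                       # dropout regularization
    layers.Dense(4, activation="softmax"),     # tessellated / normal / myopia / CNV
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5),
    tf.keras.callbacks.ModelCheckpoint("best_vgg16.keras", save_best_only=True),
]
# Training call, assuming augmented datasets and per-class weights are prepared:
# model.fit(train_ds, validation_data=val_ds, epochs=50,
#           class_weight=class_weights, callbacks=callbacks)
```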
Deep learning (DL), as one of the most transformative technologies in artificial intelligence (AI), is undergoing a pivotal transition from laboratory research to industrial deployment. Advancing at an unprecedented pace, DL is transcending theoretical and application boundaries to penetrate emerging real-world scenarios such as industrial automation, urban management, and health monitoring, thereby driving a new wave of intelligent transformation. In August 2023, Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025 [1]. However, the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning, including data heterogeneity, insufficient model generalization, computational resource constraints, and privacy-security trade-offs. The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion, lightweight design, interpretability enhancement, and cross-disciplinary collaborative optimization, in order to develop more efficient, robust, and practically valuable intelligent systems.
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value than the other approaches. Moreover, the multimodal model outperformed its single-modality variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
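One common way to combine video and audio streams of the kind described here is late fusion of per-modality embeddings; the PyTorch sketch below illustrates that pattern with assumed feature sizes, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of late multimodal fusion: embeddings from a video branch and
# an audio branch are concatenated and classified together. The embedding sizes
# and binary output are illustrative assumptions, not the paper's network.
class LateFusionClassifier(nn.Module):
    def __init__(self, video_dim: int = 512, audio_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(video_dim + audio_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, video_feat: torch.Tensor, audio_feat: torch.Tensor):
        fused = torch.cat([video_feat, audio_feat], dim=1)  # combine modalities
        return self.head(fused)

model = LateFusionClassifier()
video_feat = torch.randn(4, 512)   # e.g., pooled features from a video backbone
audio_feat = torch.randn(4, 128)   # e.g., pooled features from a speech encoder
print(model(video_feat, audio_feat).shape)  # torch.Size([4, 2])
```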
Computational solid mechanics has become an indispensable approach in engineering, and numerical investigation of fracturing in composites is essential, as composites are widely used in structural applications. Crack evolution in composites is the path to elucidating the relationship between microstructures and fracture performance, but crack-based finite-element methods are computationally expensive and time-consuming, which limits their application in computation-intensive scenarios. Consequently, this study proposes a deep learning framework called Crack-Net for instant prediction of the dynamic crack growth process, as well as its stress-strain curve. Specifically, Crack-Net introduces an implicit constraint technique, which incorporates the relationship between crack evolution and stress response into the network architecture. This technique substantially reduces data requirements while improving predictive accuracy. A transfer learning technique enables Crack-Net to handle composite materials with reinforcements of different strengths. Trained on high-accuracy fracture development datasets from phase field simulations, the proposed framework is capable of tackling intricate scenarios involving materials with diverse interfaces, varying initial conditions, and the intricate elastoplastic fracture process. The proposed Crack-Net holds great promise for practical applications in engineering and materials science, in which accurate and efficient fracture prediction is crucial for optimizing material performance and microstructural design.
Automatic detection of leukemia or blood cancer is one of the most challenging tasks that needs to be addressed in the healthcare system. Analysis of white blood cells (WBCs) in microscopic slide images of blood or bone marrow plays a crucial part in early identification and facilitates the work of medical experts. For Acute Lymphocytic Leukemia (ALL), blood or marrow samples should be analyzed by experts before the disease spreads through the whole body and the condition worsens. Researchers have done a great deal of work in this field, and several literature reviews have been published that present a comprehensive analysis of various artificial intelligence-based techniques, such as machine and deep learning, for the detection of ALL. The systematic review in this article was conducted under the PRISMA guidelines and presents the most recent advancements in the field. Different image segmentation techniques were broadly studied from various online databases, such as Google Scholar, Science Direct, and PubMed, and categorized as image processing-based, traditional machine and deep learning-based, and advanced deep learning-based models. Traditional Convolutional Neural Network (CNN)-based models are presented, followed by recent CNN advancements used for the classification of ALL into its subtypes. A critical analysis of the existing methods is provided to offer clarity on the current state of the field. Finally, the paper concludes with insights and suggestions for future research, aiming to guide new researchers in the development of advanced automated systems for detecting life-threatening diseases.
The accurate identification of microporosity is crucial for the characterization of hydrocarbon reservoir permeability and production. Scanning electron microscopy (SEM) is among the limited number of methods available to directly observe the microscopic structure of hydrocarbon reservoir rocks. Nevertheless, precise segmentation of microscopic pores at different depths in SEM images remains an unsolved challenge, known as the 'depth-related resolution loss' problem. Therefore, in this study, a 3D reconstruction technique for regions of interest (ROI) was developed for in-depth pixel analysis and differentiation among various depths of SEM images. The processed SEM images, together with the processing outcomes of this technique, were used as the input database to train a stochastic depth with multi-channel residual pathways (SdstMcrp) deep learning model programmed in Python to develop a tool for segmenting the microscopic pore spaces in SEM images obtained from the Beibuwan Basin. The more accurate segmentation helped to detect an average of 1.2 times more microporosity in SEM images, accounting for about 1.6 times more pixels and 1.2 times more pore surface area. Finally, the impact of the accurate segmentation on the calculation of permeability, a significant reservoir production property, was investigated using fractal geometry models and sensitivity analysis. The results showed that the obtained permeability values would vary by a factor of 6, which represents a considerable difference. These findings demonstrate that the proposed models can effectively identify features across a wide range of grayscale values in SEM images.
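As a simple illustration of how a pore-segmentation mask translates into the pixel-count and surface-area statistics quoted above (the study's own post-processing is not shown here), a short sketch with synthetic pores and an assumed pixel size follows.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch: derive pore statistics from a binary segmentation mask
# (1 = pore pixel). The mask contents and pixel size are illustrative values.
PIXEL_AREA_UM2 = 0.01          # assumed area of one pixel in µm², for scale

mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:20, 10:25] = 1         # synthetic pore 1
mask[60:70, 40:48] = 1         # synthetic pore 2

labels, n_pores = ndimage.label(mask)          # connected pore regions
pore_pixels = int(mask.sum())                  # total pore pixel count
pore_area_um2 = pore_pixels * PIXEL_AREA_UM2   # total pore surface area
porosity = pore_pixels / mask.size             # areal pore fraction

print(n_pores, pore_pixels, round(pore_area_um2, 2), round(porosity, 4))
# 2 pores, 230 pixels, 2.3 µm², porosity 0.023
```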
基金supported by the Basic Science Research Program(2023R1A2C3004336,RS-202300243807)&Regional Leading Research Center(RS-202400405278)through the National Research Foundation of Korea(NRF)grant funded by the Korea Government(MSIT)。
文摘Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring,clinical diagnosis,and robotic applications.Nevertheless,it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility,adhesion,self-healing,and environmental robustness with excellent sensing metrics.Herein,we report a multifunctional,anti-freezing,selfadhesive,and self-healable organogel pressure sensor composed of cobalt nanoparticle encapsulated nitrogen-doped carbon nanotubes(CoN CNT)embedded in a polyvinyl alcohol-gelatin(PVA/GLE)matrix.Fabricated using a binary solvent system of water and ethylene glycol(EG),the CoN CNT/PVA/GLE organogel exhibits excellent flexibility,biocompatibility,and temperature tolerance with remarkable environmental stability.Electrochemical impedance spectroscopy confirms near-stable performance across a broad humidity range(40%-95%RH).Freeze-tolerant conductivity under sub-zero conditions(-20℃)is attributed to the synergistic role of CoN CNT and EG,preserving mobility and network integrity.The Co N CNT/PVA/GLE organogel sensor exhibits high sensitivity of 5.75 k Pa^(-1)in the detection range from 0 to 20 k Pa,ideal for subtle biomechanical motion detection.A smart human-machine interface for English letter recognition using deep learning achieved 98%accuracy.The organogel sensor utility was extended to detect human gestures like finger bending,wrist motion,and throat vibration during speech.
文摘Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0.Manual inspection of products on assembly lines remains inefficient,prone to errors and lacks consistency,emphasizing the need for a reliable and automated inspection system.Leveraging both object detection and image segmentation approaches,this research proposes a vision-based solution for the detection of various kinds of tools in the toolkit using deep learning(DL)models.Two Intel RealSense D455f depth cameras were arranged in a top down configuration to capture both RGB and depth images of the toolkits.After applying multiple constraints and enhancing them through preprocessing and augmentation,a dataset consisting of 3300 annotated RGB-D photos was generated.Several DL models were selected through a comprehensive assessment of mean Average Precision(mAP),precision-recall equilibrium,inference latency(target≥30 FPS),and computational burden,resulting in a preference for YOLO and Region-based Convolutional Neural Networks(R-CNN)variants over ViT-based models due to the latter’s increased latency and resource requirements.YOLOV5,YOLOV8,YOLOV11,Faster R-CNN,and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics(Recall,Accuracy,F1-score,and Precision).YOLOV11 demonstrated balanced excellence with 93.0%precision,89.9%recall,and a 90.6%F1-score in object detection,as well as 96.9%precision,95.3%recall,and a 96.5%F1-score in instance segmentation with an average inference time of 25 ms per frame(≈40 FPS),demonstrating real-time performance.Leveraging these results,a YOLOV11-based windows application was successfully deployed in a real-time assembly line environment,where it accurately processed live video streams to detect and segment tools within toolkits,demonstrating its practical effectiveness in industrial automation.The application is capable of precisely measuring socket dimensions by utilising edge detection techniques on YOLOv11 segmentation masks,in addition to detection and segmentation.This makes it possible to do specification-level quality control right on the assembly line,which improves the ability to examine things in real time.The implementation is a big step forward for intelligent manufacturing in the Industry 4.0 paradigm.It provides a scalable,efficient,and accurate way to do automated inspection and dimensional verification activities.
文摘Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems.However,the inspection of underwater pipelines presents a challenge due to factors such as light scattering,absorption,restricted visibility,and ambient noise.The advancement of deep learning has introduced powerful techniques for processing large amounts of unstructured and imperfect data collected from underwater environments.This study evaluated the efficacy of the You Only Look Once(YOLO)algorithm,a real-time object detection and localization model based on convolutional neural networks,in identifying and classifying various types of pipeline defects in underwater settings.YOLOv8,the latest evolution in the YOLO family,integrates advanced capabilities,such as anchor-free detection,a cross-stage partial network backbone for efficient feature extraction,and a feature pyramid network+path aggregation network neck for robust multi-scale object detection,which make it particularly well-suited for complex underwater environments.Due to the lack of suitable open-access datasets for underwater pipeline defects,a custom dataset was captured using a remotely operated vehicle in a controlled environment.This application has the following assets available for use.Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed other models in terms of pipe defect detection and classification and achieved a strong balance between precision and recall in identifying pipeline cracks,rust,corners,defective welds,flanges,tapes,and holes.This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
文摘The rapid growth of biomedical data,particularly multi-omics data including genomes,transcriptomics,proteomics,metabolomics,and epigenomics,medical research and clinical decision-making confront both new opportunities and obstacles.The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods.As a consequence,deep learning has emerged as a strong tool for analysing numerous omics data due to its ability to handle complex and non-linear relationships.This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining.We demonstrate how autoencoders,variational autoencoders,multimodal models,attention mechanisms,transformers,and graph neural networks enable pattern analysis and recognition across all omics data.Deep learning has been found to be effective in illness classification,biomarker identification,gene network learning,and therapeutic efficacy prediction.We also consider critical problems like as data quality,model explainability,whether findings can be repeated,and computational power requirements.We now consider future elements of combining omics with clinical and imaging data,explainable AI,federated learning,and real-time diagnostics.Overall,this study emphasises the need of collaborating across disciplines to advance deep learning-based multi-omics research for precision medicine and comprehending complicated disorders.
文摘This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities,focusing on recent trends from 2022 to 2025.The primary objective is to evaluate methodological advancements,model performance,dataset usage,and existing challenges in developing clinically robust AI systems.We included peer-reviewed journal articles and highimpact conference papers published between 2022 and 2025,written in English,that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification.Excluded were non-open-access publications,books,and non-English articles.A structured search was conducted across Scopus,Google Scholar,Wiley,and Taylor&Francis,with the last search performed in August 2025.Risk of bias was not formally quantified but considered during full-text screening based on dataset diversity,validation methods,and availability of performance metrics.We used narrative synthesis and tabular benchmarking to compare performance metrics(e.g.,accuracy,Dice score)across model types(CNN,Transformer,Hybrid),imaging modalities,and datasets.A total of 49 studies were included(43 journal articles and 6 conference papers).These studies spanned over 9 public datasets(e.g.,BraTS,Figshare,REMBRANDT,MOLAB)and utilized a range of imaging modalities,predominantly MRI.Hybrid models,especially ResViT and UNetFormer,consistently achieved high performance,with classification accuracy exceeding 98%and segmentation Dice scores above 0.90 across multiple studies.Transformers and hybrid architectures showed increasing adoption post2023.Many studies lacked external validation and were evaluated only on a few benchmark datasets,raising concerns about generalizability and dataset bias.Few studies addressed clinical interpretability or uncertainty quantification.Despite promising results,particularly for hybrid deep learning models,widespread clinical adoption remains limited due to lack of validation,interpretability concerns,and real-world deployment barriers.
基金supported by the National Natural Science Foundation of China[grant number 62376217]the Young Elite Scientists Sponsorship Program by CAST[grant number 2023QNRC001]the Joint Research Project for Meteorological Capacity Improvement[grant number 24NLTSZ003]。
文摘Deep learning-based methods have become alternatives to traditional numerical weather prediction systems,offering faster computation and the ability to utilize large historical datasets.However,the application of deep learning to medium-range regional weather forecasting with limited data remains a significant challenge.In this work,three key solutions are proposed:(1)motivated by the need to improve model performance in data-scarce regional forecasting scenarios,the authors innovatively apply semantic segmentation models,to better capture spatiotemporal features and improve prediction accuracy;(2)recognizing the challenge of overfitting and the inability of traditional noise-based data augmentation methods to effectively enhance model robustness,a novel learnable Gaussian noise mechanism is introduced that allows the model to adaptively optimize perturbations for different locations,ensuring more effective learning;and(3)to address the issue of error accumulation in autoregressive prediction,as well as the challenge of learning difficulty and the lack of intermediate data utilization in one-shot prediction,the authors propose a cascade prediction approach that effectively resolves these problems while significantly improving model forecasting performance.The method achieves a competitive result in The East China Regional AI Medium Range Weather Forecasting Competition.Ablation experiments further validate the effectiveness of each component,highlighting their contributions to enhancing prediction performance.
文摘Honeycombing Lung(HCL)is a chronic lung condition marked by advanced fibrosis,resulting in enlarged air spaces with thick fibrotic walls,which are visible on Computed Tomography(CT)scans.Differentiating between normal lung tissue,honeycombing lungs,and Ground Glass Opacity(GGO)in CT images is often challenging for radiologists and may lead to misinterpretations.Although earlier studies have proposed models to detect and classify HCL,many faced limitations such as high computational demands,lower accuracy,and difficulty distinguishing between HCL and GGO.CT images are highly effective for lung classification due to their high resolution,3D visualization,and sensitivity to tissue density variations.This study introduces Honeycombing Lungs Network(HCL Net),a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches.HCL Net incorporates additional residual blocks,refined preprocessing techniques,and selective parameter tuning to improve classification performance.The dataset,sourced from the University Malaya Medical Centre(UMMC)and verified by expert radiologists,consists of CT images of normal,honeycombing,and GGO lungs.Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%.It also recorded strong performance in other metrics,achieving 93%precision,100%sensitivity,89%specificity,and an AUC-ROC score of 97%.Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net.The model significantly reduces misclassification,particularly between honeycombing and GGO lungs,enhancing diagnostic precision and reliability in lung image analysis.
基金funded by Ongoing Research Funding Program for Project number(ORF-2025-648),King Saud University,Riyadh,Saudi Arabia.
文摘Heart disease remains a leading cause of mortality worldwide,emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention.However,existing Deep Learning(DL)approaches often face several limitations,including inefficient feature extraction,class imbalance,suboptimal classification performance,and limited interpretability,which collectively hinder their deployment in clinical settings.To address these challenges,we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture.The preprocessing stage involves label encoding and feature scaling.To address the issue of class imbalance inherent in the personal key indicators of the heart disease dataset,the localized random affine shadowsampling technique is employed,which enhances minority class representation while minimizing overfitting.At the core of the framework lies the Deep Residual Network(DeepResNet),which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex,non-linear relationships in the data.Experimental results demonstrate that the proposed model significantly outperforms existing techniques,achieving improvements of 3.26%in accuracy,3.16%in area under the receiver operating characteristics,1.09%in recall,and 1.07%in F1-score.Furthermore,robustness is validated using 10-fold crossvalidation,confirming the model’s generalizability across diverse data distributions.Moreover,model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations,offering valuable insights into the contribution of individual features to model predictions.Overall,the proposed DL framework presents a robust,interpretable,and clinically applicable solution for heart disease prediction.
基金financially supported by the National Science and Technology Major Project——Deep Earth Probe and Mineral Resources Exploration(No.2024ZD1003701)the National Key R&D Program of China(No.2022YFC2905004)。
文摘An image processing and deep learning method for identifying different types of rock images was proposed.Preprocessing,such as rock image acquisition,gray scaling,Gaussian blurring,and feature dimensionality reduction,was conducted to extract useful feature information and recognize and classify rock images using Tensor Flow-based convolutional neural network(CNN)and Py Qt5.A rock image dataset was established and separated into workouts,confirmation sets,and test sets.The framework was subsequently compiled and trained.The categorization approach was evaluated using image data from the validation and test datasets,and key metrics,such as accuracy,precision,and recall,were analyzed.Finally,the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image.The experimental results indicated that the method combining deep learning,Tensor Flow-based CNN,and Py Qt5 to recognize and classify rock images has an accuracy rate of up to 98.8%,and can be successfully utilized for rock image recognition.The system can be extended to geological exploration,mine engineering,and other rock and mineral resource development to more efficiently and accurately recognize rock samples.Moreover,it can match them with the intelligent support design system to effectively improve the reliability and economy of the support scheme.The system can serve as a reference for supporting the design of other mining and underground space projects.
基金supported by National Natural Science Foundation of China(NSFC)under grant U23A20310.
文摘With the growing advancement of wireless communication technologies,WiFi-based human sensing has gained increasing attention as a non-intrusive and device-free solution.Among the available signal types,Channel State Information(CSI)offers fine-grained temporal,frequency,and spatial insights into multipath propagation,making it a crucial data source for human-centric sensing.Recently,the integration of deep learning has significantly improved the robustness and automation of feature extraction from CSI in complex environments.This paper provides a comprehensive review of deep learning-enhanced human sensing based on CSI.We first outline mainstream CSI acquisition tools and their hardware specifications,then provide a detailed discussion of preprocessing methods such as denoising,time–frequency transformation,data segmentation,and augmentation.Subsequently,we categorize deep learning approaches according to sensing tasks—namely detection,localization,and recognition—and highlight representative models across application scenarios.Finally,we examine key challenges including domain generalization,multi-user interference,and limited data availability,and we propose future research directions involving lightweight model deployment,multimodal data fusion,and semantic-level sensing.
基金funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No.(DGSSR-2025-02-01295).
文摘Alzheimer’s Disease(AD)is a progressive neurodegenerative disorder that significantly affects cognitive function,making early and accurate diagnosis essential.Traditional Deep Learning(DL)-based approaches often struggle with low-contrast MRI images,class imbalance,and suboptimal feature extraction.This paper develops a Hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer’s diagnosis by processing MRI scans.Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization(CLAHE)and Enhanced Super-Resolution Generative Adversarial Networks(ESRGAN).A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient(MCC)-based evaluation method into the design.The trained and validated model gives a 98.88%accuracy rate and 0.9614 MCC score.We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52%(±1.51),a loss of 0.1671,and an MCC score of 0.9429 across folds.The proposed framework outperforms the state-of-the-art models with a 98%weighted F1-score while decreasing misdiagnosis results for every AD stage.The model demonstrates apparent separation abilities between AD progression stages according to the results of the confusion matrix analysis.These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis,contributing to improved computer-aided diagnosis(CAD)systems in clinical practice.
基金funded by Taif University,Saudi Arabia,project No.(TU-DSPP-2024-263).
文摘Deep learning algorithms have been rapidly incorporated into many different applications due to the increase in computational power and the availability of massive amounts of data.Recently,both deep learning and ensemble learning have been used to recognize underlying structures and patterns from high-level features to make predictions/decisions.With the growth in popularity of deep learning and ensemble learning algorithms,they have received significant attention from both scientists and the industrial community due to their superior ability to learn features from big data.Ensemble deep learning has exhibited significant performance in enhancing learning generalization through the use of multiple deep learning algorithms.Although ensemble deep learning has large quantities of training parameters,which results in time and space overheads,it performs much better than traditional ensemble learning.Ensemble deep learning has been successfully used in several areas,such as bioinformatics,finance,and health care.In this paper,we review and investigate recent ensemble deep learning algorithms and techniques in health care domains,medical imaging,health care data analytics,genomics,diagnosis,disease prevention,and drug discovery.We cover several widely used deep learning algorithms along with their architectures,including deep neural networks(DNNs),convolutional neural networks(CNNs),recurrent neural networks(RNNs),and generative adversarial networks(GANs).Common healthcare tasks,such as medical imaging,electronic health records,and genomics,are also demonstrated.Furthermore,in this review,the challenges inherent in reducing the burden on the healthcare system are discussed and explored.Finally,future directions and opportunities for enhancing healthcare model performance are discussed.
Abstract: The precise identification of quartz minerals is crucial in mineralogy and geology due to their widespread occurrence and industrial significance. Traditional methods of quartz identification in thin sections are labor-intensive and require significant expertise, often complicated by the coexistence of other minerals. This study presents a novel approach leveraging deep learning techniques combined with hyperspectral imaging to automate the identification of quartz minerals. The four advanced deep learning models utilized—PSPNet, U-Net, FPN, and LinkNet—brought significant advancements in efficiency and accuracy. Among these models, PSPNet exhibited superior performance, achieving the highest intersection over union (IoU) scores and demonstrating exceptional reliability in segmenting quartz minerals, even in complex scenarios. The study involved a comprehensive dataset of 120 thin sections, encompassing 2470 hyperspectral images prepared from 20 rock samples. Expert-reviewed masks were used for model training, ensuring robust segmentation results. This automated approach not only expedites the recognition process but also enhances reliability, providing a valuable tool for geologists and advancing the field of mineralogical analysis.
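The intersection over union (IoU) metric used to rank the segmentation models can be computed directly from binary masks, as in the short sketch below; the function and mask names are illustrative and not taken from the study's code.

```python
# Sketch of the intersection-over-union (IoU) score used to compare segmentation
# models; both masks are arrays of identical shape, nonzero where quartz is present.
import numpy as np

def iou_score(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """IoU between a predicted mask and an expert-reviewed binary quartz mask."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return intersection / union if union > 0 else 1.0
```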
Funding: Supported by the National Natural Science Foundation of China (No. 12104141).
Abstract: To address the problem that the bit error rate (BER) of an asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, a deep learning technique is applied to polar code decoding in the ACO-OFDM space optical communication system. The system realizes polar code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with the traditional decoder. Simulations under different turbulence intensities and different mapping orders show that the convolutional neural network (CNN) decoder, trained under weak, medium, and strong turbulence atmospheric channels, achieves a performance improvement of about two orders of magnitude (10^2) over the conventional decoder at 4-quadrature amplitude modulation (4QAM), while the BERs for 16QAM and 64QAM lie between those of the conventional decoder.
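As a hedged sketch of what a CNN-based soft decoder of this kind can look like, the Keras model below maps a block of noisy received values to per-bit posteriors and is trained with binary cross-entropy on simulated data; the block length, layer sizes, and random training data are placeholders, not the architecture or turbulence channel model used in the paper.

```python
# Minimal sketch of a CNN-based soft decoder: it maps a block of noisy received
# values to the transmitted information bits. Block length, layer sizes, and the
# random training data are illustrative placeholders only.
import numpy as np
import tensorflow as tf

N, K = 64, 32  # codeword length and number of information bits (placeholders)

decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(N, 1)),
    tf.keras.layers.Conv1D(32, 7, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(K, activation="sigmoid"),  # per-bit posterior probabilities
])
decoder.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training idea: x = received soft values after the simulated channel,
# y = original information bits; BER is then estimated by thresholding the
# network output at 0.5 and counting bit errors on held-out blocks.
x_train = np.random.randn(1000, N, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, K)).astype("float32")
decoder.fit(x_train, y_train, epochs=1, batch_size=64, verbose=0)
```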
Funding: Supported by the "Intelligent Recognition Industry Service Center" as part of the Featured Areas Research Center Program under the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, and by the National Science and Technology Council, Taiwan, under grants [113-2622-E-224-002] and [113-2221-E-224-041]. Additional support was provided by Isuzu Optics Corporation.
Abstract: Automated classification of retinal fundus images is essential for identifying eye diseases, although there is earlier research on applying deep learning models designed especially for detecting tessellation in retinal fundus images. This study classifies four classes of retinal fundus images, three diseased classes and one normal class, by creating a refined VGG16 model to categorize fundus pictures into tessellated, normal, myopia, and choroidal neovascularization groups. The approach uses a VGG16 architecture modified with custom fully connected layers and dropout regularization, along with data augmentation techniques (rotation, flip, and rescale) on a dataset of 302 photos. Training involves class weighting and key callbacks (early stopping, learning rate reduction, checkpointing) to maximize performance. Gains in accuracy (93.42% training, 77.5% validation) and improved class-specific F1 scores are attained. Grad-CAM-based Explainable AI (XAI) highlights the image regions that are important for each classification, making the model interpretable and easier for medical experts to understand. These results highlight the model's potential as a helpful diagnostic tool in ophthalmology, providing a clear and practical method for the early identification and categorization of retinal disorders, especially in cases such as tessellated fundus images.
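A minimal Keras sketch of the described setup, a frozen VGG16 base with custom fully connected layers, dropout, class weighting, and the early stopping, learning-rate reduction, and checkpointing callbacks, is given below; the image size, layer widths, file names, and dataset objects are illustrative assumptions rather than the study's actual configuration.

```python
# Sketch of a modified VGG16 classifier for four fundus classes with dropout
# regularization and the callbacks mentioned above. All sizes are placeholders.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # regularization via dropout
    tf.keras.layers.Dense(4, activation="softmax"),  # tessellated / normal / myopia / CNV
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5),
    tf.keras.callbacks.ModelCheckpoint("best_fundus_vgg16.keras", save_best_only=True),
]
# Hypothetical training call with class weighting:
# model.fit(train_ds, validation_data=val_ds, epochs=100,
#           class_weight=class_weights, callbacks=callbacks)
```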
Funding: Supported in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2024A1515012485; in part by the Shenzhen Fundamental Research Program under Grant JCYJ20220810112354002; in part by the Shenzhen Science and Technology Program under Grant KJZD20230923114111021; in part by the Fund for Academic Innovation Teams and Research Platform of South-Central Minzu University under Grant XTZ24003 and Grant PTZ24001; in part by the Knowledge Innovation Program of Wuhan-Basic Research through Project 2023010201010151; in part by the Research Start-up Funds of South-Central Minzu University under Grant YZZ18006; and in part by the Spring Sunshine Program of the Ministry of Education of the People's Republic of China under Grant HZKY20220331.
Abstract: Introduction: Deep learning (DL), as one of the most transformative technologies in artificial intelligence (AI), is undergoing a pivotal transition from laboratory research to industrial deployment. Advancing at an unprecedented pace, DL is transcending theoretical and application boundaries to penetrate emerging real-world scenarios such as industrial automation, urban management, and health monitoring, thereby driving a new wave of intelligent transformation. In August 2023, Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025 [1]. However, the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning, including data heterogeneity, insufficient model generalization, computational resource constraints, and privacy-security trade-offs. The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion, lightweight design, interpretability enhancement, and cross-disciplinary collaborative optimization in order to develop more efficient, robust, and practically valuable intelligent systems.
Funding: Supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of the Guangdong Provincial Science & Technology Plan Projects, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value than the other approaches. Moreover, the multimodal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
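As an illustrative sketch only (not the authors' architecture), the PyTorch snippet below shows a late-fusion design in which embeddings from a video branch and an audio branch are concatenated and classified jointly; the embedding dimensions, layer sizes, and class count are placeholders.

```python
# Sketch of a late-fusion multimodal classifier: embeddings extracted from a
# video branch (limb/face actions) and an audio branch (speech test) are
# concatenated and classified jointly. All dimensions are illustrative.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, num_classes=2):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(video_dim + audio_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, video_emb, audio_emb):
        # video_emb: (batch, video_dim) from a video backbone
        # audio_emb: (batch, audio_dim) from an audio encoder
        fused = torch.cat([video_emb, audio_emb], dim=1)
        return self.fusion(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128))  # toy batch
```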
Funding: Supported and partially funded by the National Natural Science Foundation of China (52288101) and the China Postdoctoral Science Foundation (2024M761535), with additional support from the High Performance Computing Centers at Eastern Institute of Technology, Ningbo, and Ningbo Institute of Digital Twin.
Abstract: Computational solid mechanics has become an indispensable approach in engineering, and numerical investigation of fracturing in composites is essential, as composites are widely used in structural applications. Crack evolution in composites is the path to elucidating the relationship between microstructures and fracture performance, but crack-based finite-element methods are computationally expensive and time-consuming, which limits their application in computation-intensive scenarios. Consequently, this study proposes a deep learning framework called Crack-Net for instant prediction of the dynamic crack growth process as well as its strain-stress curve. Specifically, Crack-Net introduces an implicit constraint technique, which incorporates the relationship between crack evolution and stress response into the network architecture. This technique substantially reduces data requirements while improving predictive accuracy. The transfer learning technique enables Crack-Net to handle composite materials with reinforcements of different strengths. Trained on high-accuracy fracture development datasets from phase-field simulations, the proposed framework is capable of tackling intricate scenarios involving materials with diverse interfaces, varying initial conditions, and the intricate elastoplastic fracture process. The proposed Crack-Net holds great promise for practical applications in engineering and materials science, in which accurate and efficient fracture prediction is crucial for optimizing material performance and microstructural design.
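One hedged way to picture the "implicit constraint" idea is a network whose stress head is conditioned on its own predicted crack field, so the stress-strain output cannot be decoupled from the crack prediction. The PyTorch sketch below illustrates this interpretation with placeholder sizes; it is not the authors' Crack-Net architecture.

```python
# Illustrative sketch (not the authors' architecture) of coupling crack evolution
# and stress response: the stress head takes the predicted crack field as an extra
# input channel, so the stress-strain curve is implicitly constrained by the crack
# prediction. All sizes are placeholders.
import torch
import torch.nn as nn

class CoupledCrackStressNet(nn.Module):
    def __init__(self, field_size=64, curve_points=50):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.crack_head = nn.Conv2d(32, 1, 1)  # predicted crack field
        self.stress_head = nn.Sequential(      # stress-strain curve from features + crack field
            nn.Flatten(),
            nn.Linear((32 + 1) * field_size * field_size, 256), nn.ReLU(),
            nn.Linear(256, curve_points),
        )

    def forward(self, microstructure):
        feats = self.encoder(microstructure)
        crack = torch.sigmoid(self.crack_head(feats))
        stress = self.stress_head(torch.cat([feats, crack], dim=1))
        return crack, stress

net = CoupledCrackStressNet()
crack, curve = net(torch.randn(2, 1, 64, 64))  # toy microstructure batch
```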
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00460621, Developing BCI-Based Digital Health Technologies for Mental Illness and Pain Management).
Abstract: Automatic detection of leukemia, or blood cancer, is one of the most challenging tasks that needs to be addressed in the healthcare system. Analysis of white blood cells (WBCs) in microscopic slide images of blood or bone marrow plays a crucial part in early identification and facilitates the work of medical experts. For Acute Lymphocytic Leukemia (ALL), blood or marrow is the preferred sample to be analyzed by experts before the disease spreads through the whole body and the condition worsens. Researchers have done a great deal of work in this field, and a few literature reviews have been published that provide a comprehensive analysis of various artificial intelligence-based techniques, such as machine learning and deep learning, for the detection of ALL. The systematic review presented in this article follows the PRISMA guidelines and covers the most recent advancements in the field. Different image segmentation techniques, collected from online databases such as Google Scholar, Science Direct, and PubMed, were broadly studied and categorized into image processing-based, traditional machine learning- and deep learning-based, and advanced deep learning-based models. Traditional Convolutional Neural Network (CNN)-based models are covered first, followed by recent CNN advancements used for classifying ALL into its subtypes. A critical analysis of the existing methods is provided to offer clarity on the current state of the field. Finally, the paper concludes with insights and suggestions for future research, aiming to guide new researchers in the development of advanced automated systems for detecting life-threatening diseases.
Funding: Supported by the Natural Science Foundation of Shandong Province of China (Nos. ZR2022QD080 and ZR2025MS575), the National Natural Science Foundation of China (Nos. W25322063, 42250410333, and 52250410357), the Fundamental Research Funds for the Central Universities, CHD (No. 300102263103), and the Young Talent Fund of the Association for Science and Technology in Shaanxi, China (No. 20230703).
Abstract: The accurate identification of microporosity is crucial for the characterization of hydrocarbon reservoir permeability and production. Scanning electron microscopy (SEM) is among the limited number of methods available to directly observe the microscopic structure of hydrocarbon reservoir rocks. Nevertheless, precise segmentation of microscopic pores at different depths in SEM images remains an unsolved challenge, known as the 'depth-related resolution loss' problem. Therefore, in this study, a 3D reconstruction technique for regions of interest (ROI) was developed for in-depth pixel analysis and differentiation among various depths of SEM images. The processed SEM images, together with the processing outcomes of this technique, were used as the input database to train a stochastic depth with multi-channel residual pathways (SdstMcrp) deep learning model, programmed in Python, to develop a tool for segmenting the microscopic pore spaces in SEM images obtained from the Beibuwan Basin. The more accurate segmentation helped to detect on average 1.2 times more microporosity in SEM images, accounting for about 1.6 times more pixels and 1.2 times more pore surface area. Finally, the impact of the accurate segmentation on the calculation of permeability, a significant reservoir production property, was investigated using fractal geometry models and sensitivity analysis. The results showed that the obtained permeability values would vary by a factor of 6, which represents a considerable difference. These findings demonstrate that the proposed models can effectively identify features across a wide range of grayscale values in SEM images.
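As a hedged illustration of the fractal-geometry ingredient mentioned above, the Python sketch below estimates a box-counting fractal dimension from a binary pore mask, the kind of quantity fractal permeability models build on; the box sizes and random mask are placeholders, and the study's specific permeability model is not reproduced here.

```python
# Sketch of box-counting fractal dimension estimation from a binary pore mask.
# Box sizes and the toy random mask are illustrative placeholders.
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension of a square binary pore mask by box counting."""
    counts = []
    for s in sizes:
        n = mask.shape[0] // s
        trimmed = mask[:n * s, :n * s]
        blocks = trimmed.reshape(n, s, n, s).any(axis=(1, 3))  # is each box occupied?
        counts.append(blocks.sum())
    # Slope of log(count) vs log(1/size) gives the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

pore_mask = np.random.rand(256, 256) > 0.7  # toy segmented pore mask
print("Estimated fractal dimension:", box_counting_dimension(pore_mask))
```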