Neural networks with physical governing equations as constraints have recently created a new trend in machine learning research. In this context, a review of related research is first presented and discussed. The potential offered by such physics-informed deep learning models for computations in geomechanics is demonstrated by application to one-dimensional (1D) consolidation. The governing equation for 1D problems is applied as a constraint in the deep learning model. The deep learning model relies on automatic differentiation for applying the governing equation as a constraint, based on the mathematical approximations established by the neural network. The total loss is measured as a combination of the training loss (based on analytical and model-predicted solutions) and the constraint loss (a requirement to satisfy the governing equation). Two classes of problems are considered: forward and inverse problems. The forward problems demonstrate the performance of a physically constrained neural network model in predicting solutions for 1D consolidation problems. The inverse problems demonstrate prediction of the coefficient of consolidation. Terzaghi's problem, with varying boundary conditions, is used as a numerical example, and the deep learning model shows remarkable performance in both the forward and inverse problems. While the application demonstrated here is a simple 1D consolidation problem, such a deep learning model integrated with a physical law has significant implications for applications such as faster real-time numerical prediction for digital twins, numerical model reproducibility, and constitutive model parameter optimization.
Funding: The research is supported by internal funding from SINTEF through a strategic project focusing on Machine Learning and Digitalization in the infrastructure sector.
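To make the constraint mechanism concrete, the following is a minimal PyTorch sketch (not the authors' code) of how automatic differentiation can enforce Terzaghi's governing equation, ∂u/∂t = c_v ∂²u/∂z², as a penalty term in the loss; the network size, collocation sampling, and c_v initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Small fully connected network mapping (z, t) to excess pore pressure u.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
c_v = torch.tensor(1.0, requires_grad=True)  # made trainable for the inverse problem

def pde_residual(z, t):
    z.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([z, t], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_z = torch.autograd.grad(u, z, ones, create_graph=True)[0]
    u_zz = torch.autograd.grad(u_z, z, torch.ones_like(u_z), create_graph=True)[0]
    return u_t - c_v * u_zz  # vanishes wherever the governing equation is satisfied

# Total loss = training loss at data points + constraint loss at collocation points.
z_c, t_c = torch.rand(200, 1), torch.rand(200, 1)
constraint_loss = (pde_residual(z_c, t_c) ** 2).mean()
```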
Physics-informed deep learning has drawn tremendous interest in recent years for solving computational physics problems. Its basic concept is to embed physical laws to constrain/inform neural networks, so that less data is needed to train a reliable model. This can be achieved by incorporating the residual of the physics equations into the loss function; by minimizing the loss function, the network approximates the solution. In this paper, we propose a mixed-variable scheme of physics-informed neural network (PINN) for fluid dynamics and apply it to simulate steady and transient laminar flows at low Reynolds numbers. A parametric study indicates that the mixed-variable scheme improves both the PINN trainability and the solution accuracy. The velocity and pressure fields predicted by the proposed PINN approach are also compared with reference numerical solutions. Simulation results demonstrate the great potential of the proposed PINN for highly accurate fluid flow simulation.
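Concretely, the loss construction described here takes the generic composite form below (an illustrative form; the paper's exact residual terms and weighting may differ), where the first term penalizes mismatch against available data and the second penalizes the residual r of the governing equations (continuity and momentum for laminar flow) at N_c collocation points, balanced by a weight λ:

```latex
\mathcal{L} \;=\;
\underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}\bigl\|\hat{u}(\mathbf{x}_i,t_i)-u_i\bigr\|^2}_{\text{data loss}}
\;+\;
\lambda\,\underbrace{\frac{1}{N_c}\sum_{j=1}^{N_c}\bigl\|r\!\left(\hat{u};\,\mathbf{x}_j,t_j\right)\bigr\|^2}_{\text{PDE residual loss}}
```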
In this work, a physics-informed neural network (PINN) designed specifically for analyzing digital materials is introduced. This proposed machine learning (ML) model can be trained free of ground-truth data by adopting the minimum-energy criterion as its loss function. Results show that our energy-based PINN reaches similar accuracy to supervised ML models. Adding a hinge loss on the Jacobian constrains the model to avoid erroneous deformation gradients caused by the nonlinear logarithmic strain. Lastly, we discuss how the strain energy of each material element at each numerical integration point can be calculated in parallel on a GPU. The algorithm is tested on different mesh densities to evaluate its computational efficiency, which scales linearly with the number of nodes in the system. This work provides a foundation for encoding physical behaviors of digital materials directly into neural networks, enabling label-free learning for the design of next-generation composites.
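One plausible reading of the Jacobian hinge term (a sketch under our assumptions, not necessarily the paper's exact formulation) is to penalize deformation gradients whose determinant approaches zero or turns negative, where the logarithmic strain becomes ill-defined:

```python
import torch

def jacobian_hinge(F, eps=1e-3):
    # F: (N, 3, 3) deformation gradients at the integration points.
    det_f = torch.linalg.det(F)
    # Hinge: zero penalty once det(F) >= eps, linear penalty below it.
    return torch.relu(eps - det_f).mean()
```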
Accurate traffic forecasting is crucial for understanding and managing congestion for efficient transportation planning. However, conventional approaches often neglect epistemic uncertainty, which arises from incomplete knowledge across different spatiotemporal scales. This study addresses this challenge by introducing a novel methodology to establish dynamic spatiotemporal correlations that capture the unobserved heterogeneity in travel time through distinct peaks in probability density functions, guided by physics-based principles. We propose an innovative approach to modifying both the prediction and correction steps of the Kalman filter (KF) algorithm by leveraging the established spatiotemporal correlations. Central to our approach is the development of a novel deep learning (DL) model called the physics-informed graph convolutional gated recurrent neural network (PI-GRNN). Functioning as the state-space model within the KF, the PI-GRNN exploits the established correlations to construct dynamic adjacency matrices that utilize the inherent structure and relationships within the transportation network to capture sequential patterns and dependencies over time. Furthermore, our methodology integrates insights gained from the correlations into the correction step of the KF algorithm, enhancing its correctional capabilities. This integrated approach proves instrumental in alleviating the model drift inherent in data-driven methods, as periodic corrections through the update step of the KF refine the predictions generated by the PI-GRNN. To the best of our knowledge, this study represents a pioneering effort in integrating DL and KF algorithms in this unique symbiotic manner. Through extensive experimentation with real-world traffic data, we demonstrate the superior performance of our model compared to benchmark approaches.
Funding: This research was funded by NSF (1910397, 2106989) and NCDOT (TCE2020-01).
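The predict/correct structure being modified is the standard Kalman recursion; a schematic NumPy version with the learned state-space model slotted into the prediction step might look as follows (the interface and the matrices F, H, Q, R are assumptions, not the paper's implementation):

```python
import numpy as np

def kf_step(x, P, z, predict_fn, F, H, Q, R):
    # Predict: the learned model supplies the state forecast, while F
    # linearizes its dynamics for the covariance propagation.
    x_pred = predict_fn(x)                 # e.g., PI-GRNN travel-time forecast
    P_pred = F @ P @ F.T + Q
    # Correct: fuse the forecast with the observed travel times z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```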
The solar cycle (SC), a phenomenon caused by quasi-periodic regular activities in the Sun, occurs approximately every 11 years. Intense solar activity can disrupt the Earth's ionosphere, affecting communication and navigation systems. Consequently, accurately predicting the intensity of the SC holds great significance, but predicting the SC involves a long-term time series, and many existing time-series forecasting methods have fallen short in terms of accuracy and efficiency. The Time-series Dense Encoder model is a deep learning solution tailored for long time-series prediction. Based on a multi-layer perceptron structure, it outperforms the best previously existing models in accuracy while being efficiently trainable on general datasets. We propose a method based on this model for SC forecasting. Using a trained model, we predict the test set from SC 19 to SC 25 with an average mean absolute percentage error of 32.02, a root mean square error of 30.3, a mean absolute error of 23.32, and an R² (coefficient of determination) of 0.76, outperforming other deep learning models in terms of accuracy and training efficiency on sunspot-number datasets. Subsequently, we use it to predict the peaks of SC 25 and SC 26. For SC 25, the peak time has already passed, but a stronger peak of 199.3, within a range of 170.8-221.9, is predicted for SC 26 and is projected to occur during April 2034.
Funding: Supported by the Academic Research Projects of Beijing Union University (ZK20202204), the National Natural Science Foundation of China (12250005, 12073040, 12273059, 11973056, 12003051, 11573037, 12073041, 11427901, 11572005, 11611530679, and 12473052), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB0560000, XDA15052200, XDB09040200, XDA15010700, XDB0560301, and XDA15320102), and the Chinese Meridian Project (CMP).
Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring, clinical diagnosis, and robotic applications. Nevertheless, it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility, adhesion, self-healing, and environmental robustness with excellent sensing metrics. Herein, we report a multifunctional, anti-freezing, self-adhesive, and self-healable organogel pressure sensor composed of cobalt nanoparticle-encapsulated nitrogen-doped carbon nanotubes (CoN CNT) embedded in a polyvinyl alcohol-gelatin (PVA/GLE) matrix. Fabricated using a binary solvent system of water and ethylene glycol (EG), the CoN CNT/PVA/GLE organogel exhibits excellent flexibility, biocompatibility, and temperature tolerance with remarkable environmental stability. Electrochemical impedance spectroscopy confirms near-stable performance across a broad humidity range (40%-95% RH). Freeze-tolerant conductivity under sub-zero conditions (-20°C) is attributed to the synergistic role of CoN CNT and EG, preserving mobility and network integrity. The CoN CNT/PVA/GLE organogel sensor exhibits a high sensitivity of 5.75 kPa⁻¹ in the detection range from 0 to 20 kPa, ideal for subtle biomechanical motion detection. A smart human-machine interface for English letter recognition using deep learning achieved 98% accuracy. The organogel sensor's utility was extended to detect human gestures such as finger bending, wrist motion, and throat vibration during speech.
Funding: Supported by the Basic Science Research Program (2023R1A2C3004336, RS-202300243807) and the Regional Leading Research Center (RS-202400405278) through the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT).
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of the multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path-planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability than baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
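For reference, the dueling decomposition that AD-Dueling DQN builds on splits the Q-value into a state value and per-action advantages, Q(s,a) = V(s) + A(s,a) - mean(A); a minimal PyTorch sketch of the head (the attention encoder upstream is only indicated):

```python
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, feat_dim, n_actions):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)        # state value V(s)
        self.adv = nn.Linear(feat_dim, n_actions)  # advantages A(s, a)

    def forward(self, feats):  # feats: output of the (attention) encoder
        v, a = self.value(feats), self.adv(feats)
        return v + a - a.mean(dim=-1, keepdim=True)
```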
Nondestructive phenotype measurement technology can provide substantial phenotypic data support for applications such as seedling breeding, management, and quality testing. Current methods of measuring seedling phenotypes rely mainly on manual measurement, which is inefficient, subjective, and destroys samples. Therefore, this paper proposes a nondestructive measurement method for the canopy phenotype of watermelon plug seedlings based on deep learning. An Azure Kinect was used to capture canopy color images, depth images, and RGB-D images of the watermelon plug seedlings. The Mask R-CNN network was used to classify, segment, and count the canopy leaves of the watermelon plug seedlings. To reduce the error in leaf area measurement caused by mutual occlusion of leaves, the leaves were repaired by CycleGAN and the depth images were restored by image processing. Then, Delaunay triangulation was adopted to measure the leaf area in the leaf point cloud. The YOLOX target detection network was used to identify the growing-point position of each seedling on the plug tray, and the depth differences between the growing point and the upper surface of the plug tray were calculated to obtain plant height. The experimental results show that the nondestructive measurement algorithm proposed in this paper achieves good measurement performance for watermelon plug seedlings from the 1-true-leaf to 3-true-leaf stages. The average relative error of measurement is 2.33% for the number of true leaves, 4.59% for the number of cotyledons, 8.37% for the leaf area, and 3.27% for the plant height. These results demonstrate that the proposed algorithm provides an effective solution for the nondestructive measurement of the canopy phenotype of plug seedlings.
Funding: Funded by the National Key Research and Development Program of China (Grant No. 2019YFD1001900) and the HZAU-AGIS Cooperation Fund (Grant No. SZYJY2022006).
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize modularity, and the corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative-deviation normalization scheme to reduce mutual interference between the reward and penalty terms. A soft bonus-scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
Funding: Funded by the Beijing Engineering Research Center of Electric Rail Transportation.
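The modularity objective maximized by the MDP can be evaluated directly, for example with NetworkX (the graph and candidate partition below are placeholders, not the IEEE 39-bus system):

```python
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.karate_club_graph()                        # stand-in for the grid graph
partition = [set(range(17)), set(range(17, 34))]  # candidate restoration groups
print(f"modularity of this partitioning: {modularity(G, partition):.3f}")
```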
Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems. However, the inspection of underwater pipelines presents a challenge due to factors such as light scattering, absorption, restricted visibility, and ambient noise. The advancement of deep learning has introduced powerful techniques for processing large amounts of unstructured and imperfect data collected from underwater environments. This study evaluated the efficacy of the You Only Look Once (YOLO) algorithm, a real-time object detection and localization model based on convolutional neural networks, in identifying and classifying various types of pipeline defects in underwater settings. YOLOv8, the latest evolution in the YOLO family, integrates advanced capabilities such as anchor-free detection, a cross-stage partial network backbone for efficient feature extraction, and a feature pyramid network plus path aggregation network neck for robust multi-scale object detection, which make it particularly well-suited to complex underwater environments. Due to the lack of suitable open-access datasets for underwater pipeline defects, a custom dataset was captured using a remotely operated vehicle in a controlled environment. Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed other models in pipe defect detection and classification and achieved a strong balance between precision and recall in identifying pipeline cracks, rust, corners, defective welds, flanges, tapes, and holes. This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
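For context, running a trained YOLOv8 model over an inspection frame with the ultralytics package looks roughly like this (the weights file, frame path, and confidence threshold are assumptions):

```python
from ultralytics import YOLO

model = YOLO("yolov8x.pt")                  # X-Large variant, as evaluated above
results = model.predict("frame.jpg", conf=0.25)
for box in results[0].boxes:
    label = results[0].names[int(box.cls)]  # e.g., crack, rust, defective weld
    print(label, float(box.conf), box.xyxy.tolist())
```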
This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned nine public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated on only a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
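For reference, the Dice score reported across the reviewed segmentation studies is computed from binary masks as follows (a standard NumPy formulation):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    # pred, truth: binary segmentation masks of identical shape.
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```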
Deep learning-based methods have become alternatives to traditional numerical weather prediction systems, offering faster computation and the ability to utilize large historical datasets. However, applying deep learning to medium-range regional weather forecasting with limited data remains a significant challenge. In this work, three key solutions are proposed: (1) motivated by the need to improve model performance in data-scarce regional forecasting scenarios, the authors innovatively apply semantic segmentation models to better capture spatiotemporal features and improve prediction accuracy; (2) recognizing the challenge of overfitting and the inability of traditional noise-based data augmentation to effectively enhance model robustness, a novel learnable Gaussian noise mechanism is introduced that allows the model to adaptively optimize perturbations for different locations, ensuring more effective learning; and (3) to address error accumulation in autoregressive prediction, as well as the learning difficulty and lack of intermediate-data utilization in one-shot prediction, the authors propose a cascade prediction approach that effectively resolves these problems while significantly improving forecasting performance. The method achieves a competitive result in the East China Regional AI Medium-Range Weather Forecasting Competition. Ablation experiments further validate the effectiveness of each component, highlighting their contributions to enhanced prediction performance.
Funding: Supported by the National Natural Science Foundation of China (grant number 62376217), the Young Elite Scientists Sponsorship Program by CAST (grant number 2023QNRC001), and the Joint Research Project for Meteorological Capacity Improvement (grant number 24NLTSZ003).
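The learnable Gaussian noise mechanism in (2) can be read as a per-location noise scale trained jointly with the network; a PyTorch sketch under that reading (grid shape and initialization are placeholders, not the authors' code):

```python
import torch
import torch.nn as nn

class LearnableNoise(nn.Module):
    def __init__(self, H, W):
        super().__init__()
        # One log-scale parameter per grid location, optimized with the model.
        self.log_sigma = nn.Parameter(torch.full((1, 1, H, W), -3.0))

    def forward(self, x):  # x: (batch, channels, H, W) weather fields
        if self.training:
            x = x + torch.exp(self.log_sigma) * torch.randn_like(x)
        return x
```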
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces the Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature-engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed: each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared with existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101).
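Each agent's update follows the standard Double DQN rule, with action selection by the online network and evaluation by the target network; the RBFN-based objective weighting sits on top of these per-objective values. A minimal sketch:

```python
import torch

def ddqn_target(reward, s_next, done, q_online, q_target, gamma=0.99):
    with torch.no_grad():
        a_star = q_online(s_next).argmax(dim=1, keepdim=True)   # online net selects
        q_next = q_target(s_next).gather(1, a_star).squeeze(1)  # target net evaluates
    return reward + gamma * (1.0 - done) * q_next
```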
With the growing advancement of wireless communication technologies, WiFi-based human sensing has gained increasing attention as a non-intrusive and device-free solution. Among the available signal types, Channel State Information (CSI) offers fine-grained temporal, frequency, and spatial insights into multipath propagation, making it a crucial data source for human-centric sensing. Recently, the integration of deep learning has significantly improved the robustness and automation of feature extraction from CSI in complex environments. This paper provides a comprehensive review of deep learning-enhanced human sensing based on CSI. We first outline mainstream CSI acquisition tools and their hardware specifications, then provide a detailed discussion of preprocessing methods such as denoising, time-frequency transformation, data segmentation, and augmentation. Subsequently, we categorize deep learning approaches according to sensing tasks, namely detection, localization, and recognition, and highlight representative models across application scenarios. Finally, we examine key challenges, including domain generalization, multi-user interference, and limited data availability, and propose future research directions involving lightweight model deployment, multimodal data fusion, and semantic-level sensing.
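As a concrete example of the preprocessing pipeline surveyed here, amplitude extraction, median-filter denoising, and a time-frequency transform of CSI might be chained as follows (the array shape, sampling rate, and filter parameters are assumptions):

```python
import numpy as np
from scipy.signal import medfilt, stft

# Synthetic CSI: (time samples, subcarriers) complex channel estimates.
csi = np.random.randn(3000, 30) + 1j * np.random.randn(3000, 30)
amp = np.abs(csi)                                # amplitude features
amp = medfilt(amp, kernel_size=(5, 1))           # simple temporal denoising
f, t, Z = stft(amp[:, 0], fs=100, nperseg=256)   # spectrogram of one subcarrier
```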
An image processing and deep learning method for identifying different types of rock images was proposed. Preprocessing steps such as rock image acquisition, grayscale conversion, Gaussian blurring, and feature dimensionality reduction were conducted to extract useful feature information, and the rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) with a PyQt5 interface. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The classification approach was evaluated using image data from the validation and test datasets, and key metrics, such as accuracy, precision, and recall, were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images has an accuracy rate of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to recognize rock samples more efficiently and accurately. Moreover, it can be matched with an intelligent support design system to effectively improve the reliability and economy of the support scheme, and can serve as a reference for the support design of other mining and underground space projects.
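A minimal TensorFlow/Keras CNN of the kind described might look as follows (the input resolution and number of lithological classes are placeholders, not the paper's architecture):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),  # lithological classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```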
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions for combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and for understanding complicated disorders.
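As an illustration of the autoencoder pattern discussed for omics data, a compact PyTorch sketch (feature and latent dimensions are placeholders for, e.g., gene-expression profiles):

```python
import torch.nn as nn

class OmicsAutoencoder(nn.Module):
    def __init__(self, n_features=2000, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 512), nn.ReLU(),
                                 nn.Linear(512, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(),
                                 nn.Linear(512, n_features))

    def forward(self, x):
        z = self.enc(x)        # low-dimensional omics embedding
        return self.dec(z), z  # reconstruction plus the embedding itself
```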
Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of the heart disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
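The hierarchical residual transformations described for DeepResNet follow the generic residual pattern below (our sketch; the layer widths and normalization choices are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim),
                                  nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return F.relu(x + self.body(x))  # identity skip connection
```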
Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, error-prone, and inconsistent, emphasizing the need for a reliable, automated inspection system. Leveraging both object detection and image segmentation, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset consisting of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's increased latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (recall, accuracy, F1-score, and precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly-line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. In addition to detection and segmentation, the application precisely measures socket dimensions by applying edge detection techniques to the YOLOv11 segmentation masks, enabling specification-level quality control directly on the assembly line and improving real-time inspection capability. The implementation represents a significant step forward for intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification tasks.
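The dimension-measurement step can be sketched as contour extraction on a segmentation mask followed by a pixel-to-millimetre conversion (the OpenCV pipeline and calibration factor are our assumptions, not the deployed application's code):

```python
import cv2
import numpy as np

def socket_width_mm(mask, mm_per_px=0.25):
    # mask: binary segmentation mask produced by the YOLOv11 model.
    mask_u8 = mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)  # the detected socket outline
    (_, _), (w, h), _ = cv2.minAreaRect(largest)  # rotated bounding box
    return min(w, h) * mm_per_px                  # narrow side as the width
```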
The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early-detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints, high computational costs, and the time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the model's generalization and effectiveness in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
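The decentralized training described here rests on server-side aggregation of client model weights; a generic federated-averaging sketch in PyTorch style (the paper's exact aggregation protocol may differ):

```python
def fed_avg(client_states, client_sizes):
    # Weighted average of client state_dicts (assumes floating-point tensors).
    total = float(sum(client_sizes))
    return {
        key: sum(n * sd[key] for sd, n in zip(client_states, client_sizes)) / total
        for key in client_states[0]
    }
# The server then loads the result with global_model.load_state_dict(...).
```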
基金The research is supported by internal funding from SINTEF through a strategic project focusing on Machine Learning and Digitalization in the infrastructure sector.
文摘Neural networks with physical governing equations as constraints have recently created a new trend in machine learning research.In this context,a review of related research is first presented and discussed.The potential offered by such physics-informed deep learning models for computations in geomechanics is demonstrated by application to one-dimensional(1D)consolidation.The governing equation for 1D problems is applied as a constraint in the deep learning model.The deep learning model relies on automatic differentiation for applying the governing equation as a constraint,based on the mathematical approximations established by the neural network.The total loss is measured as a combination of the training loss(based on analytical and model predicted solutions)and the constraint loss(a requirement to satisfy the governing equation).Two classes of problems are considered:forward and inverse problems.The forward problems demonstrate the performance of a physically constrained neural network model in predicting solutions for 1D consolidation problems.Inverse problems show prediction of the coefficient of consolidation.Terzaghi’s problem,with varying boundary conditions,is used as a numerical example and the deep learning model shows a remarkable performance in both the forward and inverse problems.While the application demonstrated here is a simple 1D consolidation problem,such a deep learning model integrated with a physical law has significant implications for use in,such as,faster realtime numerical prediction for digital twins,numerical model reproducibility and constitutive model parameter optimization.
文摘Physics-informed deep learning has drawn tremendous interest in recent years to solve computational physics problems,whose basic concept is to embed physical laws to constrain/inform neural networks,with the need of less data for training a reliable model.This can be achieved by incorporating the residual of physics equations into the loss function.Through minimizing the loss function,the network could approximate the solution.In this paper,we propose a mixed-variable scheme of physics-informed neural network(PINN)for fluid dynamics and apply it to simulate steady and transient laminar flows at low Reynolds numbers.A parametric study indicates that the mixed-variable scheme can improve the PINN trainability and the solution accuracy.The predicted velocity and pressure fields by the proposed PINN approach are also compared with the reference numerical solutions.Simulation results demonstrate great potential of the proposed PINN for fluid flow simulation with a high accuracy.
文摘In this work,a physics-informed neural network(PINN)designed specifically for analyzing digital mate-rials is introduced.This proposed machine learning(ML)model can be trained free of ground truth data by adopting the minimum energy criteria as its loss function.Results show that our energy-based PINN reaches similar accuracy as supervised ML models.Adding a hinge loss on the Jacobian can constrain the model to avoid erroneous deformation gradient caused by the nonlinear logarithmic strain.Lastly,we discuss how the strain energy of each material element at each numerical integration point can be calculated parallelly on a GPU.The algorithm is tested on different mesh densities to evaluate its com-putational efficiency which scales linearly with respect to the number of nodes in the system.This work provides a foundation for encoding physical behaviors of digital materials directly into neural networks,enabling label-free learning for the design of next-generation composites.
基金Funding for this research was provided by NSF[1910397,2106989]NCDOT[TCE2020-01].
文摘Accurate traffic forecasting is crucial for understanding and managing congestion for effi-cient transportation planning.However,conventional approaches often neglect epistemic uncertainty,which arises from incomplete knowledge across different spatiotemporal scales.This study addresses this challenge by introducing a novel methodology to establish dynamic spatiotemporal correlations that captures the unobserved heterogeneity in travel time through distinct peaks in probability density functions,guided by physics-based prin-ciples.We propose an innovative approach to modifying both prediction and correction steps of the Kalman filter(KF)algorithm by leveraging established spatiotemporal correla-tions.Central to our approach is the development of a novel deep learning(DL)model called the physics informed-graph convolutional gated recurrent neural network(PI-GRNN).Functioning as the state-space model within the KF,the PI-GRNN exploits estab-lished correlations to construct dynamic adjacency matrices that utilize the inherent struc-ture and relationships within the transportation network to capture sequential patterns and dependencies over time.Furthermore,our methodology integrates insights gained from correlations into the correction step of the KF algorithm that helps in enhancing its correctional capabilities.This integrated approach proves instrumental in alleviating the inherent model drift associated with data-driven methods,as periodic corrections through update step of KF refine the predictions generated by the PI-GRNN.To the best of our knowledge,this study represents a pioneering effort in integrating DL and KF algorithms in this unique symbiotic manner.Through extensive experimentation with real-world traf-fic data,we demonstrate the superior performance of our model compared to the bench-mark approaches.
基金supported by the Academic Research Projects of Beijing Union University(ZK20202204)the National Natural Science Foundation of China(12250005,12073040,12273059,11973056,12003051,11573037,12073041,11427901,11572005,11611530679 and 12473052)+1 种基金the Strategic Priority Research Program of the China Academy of Sciences(XDB0560000,XDA15052200,XDB09040200,XDA15010700,XDB0560301,and XDA15320102)the Chinese Meridian Project(CMP).
文摘The solar cycle(SC),a phenomenon caused by the quasi-periodic regular activities in the Sun,occurs approximately every 11 years.Intense solar activity can disrupt the Earth’s ionosphere,affecting communication and navigation systems.Consequently,accurately predicting the intensity of the SC holds great significance,but predicting the SC involves a long-term time series,and many existing time series forecasting methods have fallen short in terms of accuracy and efficiency.The Time-series Dense Encoder model is a deep learning solution tailored for long time series prediction.Based on a multi-layer perceptron structure,it outperforms the best previously existing models in accuracy,while being efficiently trainable on general datasets.We propose a method based on this model for SC forecasting.Using a trained model,we predict the test set from SC 19 to SC 25 with an average mean absolute percentage error of 32.02,root mean square error of 30.3,mean absolute error of 23.32,and R^(2)(coefficient of determination)of 0.76,outperforming other deep learning models in terms of accuracy and training efficiency on sunspot number datasets.Subsequently,we use it to predict the peaks of SC 25 and SC 26.For SC 25,the peak time has ended,but a stronger peak is predicted for SC 26,of 199.3,within a range of 170.8-221.9,projected to occur during April 2034.
基金supported by the Basic Science Research Program(2023R1A2C3004336,RS-202300243807)&Regional Leading Research Center(RS-202400405278)through the National Research Foundation of Korea(NRF)grant funded by the Korea Government(MSIT)。
文摘Wearable sensors integrated with deep learning techniques have the potential to revolutionize seamless human-machine interfaces for real-time health monitoring,clinical diagnosis,and robotic applications.Nevertheless,it remains a critical challenge to simultaneously achieve desirable mechanical and electrical performance along with biocompatibility,adhesion,self-healing,and environmental robustness with excellent sensing metrics.Herein,we report a multifunctional,anti-freezing,selfadhesive,and self-healable organogel pressure sensor composed of cobalt nanoparticle encapsulated nitrogen-doped carbon nanotubes(CoN CNT)embedded in a polyvinyl alcohol-gelatin(PVA/GLE)matrix.Fabricated using a binary solvent system of water and ethylene glycol(EG),the CoN CNT/PVA/GLE organogel exhibits excellent flexibility,biocompatibility,and temperature tolerance with remarkable environmental stability.Electrochemical impedance spectroscopy confirms near-stable performance across a broad humidity range(40%-95%RH).Freeze-tolerant conductivity under sub-zero conditions(-20℃)is attributed to the synergistic role of CoN CNT and EG,preserving mobility and network integrity.The Co N CNT/PVA/GLE organogel sensor exhibits high sensitivity of 5.75 k Pa^(-1)in the detection range from 0 to 20 k Pa,ideal for subtle biomechanical motion detection.A smart human-machine interface for English letter recognition using deep learning achieved 98%accuracy.The organogel sensor utility was extended to detect human gestures like finger bending,wrist motion,and throat vibration during speech.
文摘At present,energy consumption is one of the main bottlenecks in autonomous mobile robot development.To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments,this paper proposes an Attention-Enhanced Dueling Deep Q-Network(ADDueling DQN),which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework.A multi-objective reward function,centered on energy efficiency,is designed to comprehensively consider path length,terrain slope,motion smoothness,and obstacle avoidance,enabling optimal low-energy trajectory generation in 3D space from the source.The incorporation of a multihead attention mechanism allows the model to dynamically focus on energy-critical state features—such as slope gradients and obstacle density—thereby significantly improving its ability to recognize and avoid energy-intensive paths.Additionally,the prioritized experience replay mechanism accelerates learning from key decision-making experiences,suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly.The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios.Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments.Moreover,the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms,highlighting its global optimization capability under energy-aware objectives in complex terrains.This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
基金funded by the National Key Research and Development Program of China(Grant No.2019YFD1001900)the HZAU-AGIS Cooperation Fund(Grant No.SZYJY2022006).
文摘Nondestructive measurement technology of phenotype can provide substantial phenotypic data support for applications such as seedling breeding,management,and quality testing.The current method of measuring seedling phenotypes mainly relies on manual measurement which is inefficient,subjective and destroys samples.Therefore,the paper proposes a nondestructive measurement method for the canopy phenotype of the watermelon plug seedlings based on deep learning.The Azure Kinect was used to shoot canopy color images,depth images,and RGB-D images of the watermelon plug seedlings.The Mask-RCNN network was used to classify,segment,and count the canopy leaves of the watermelon plug seedlings.To reduce the error of leaf area measurement caused by mutual occlusion of leaves,the leaves were repaired by CycleGAN,and the depth images were restored by image processing.Then,the Delaunay triangulation was adopted to measure the leaf area in the leaf point cloud.The YOLOX target detection network was used to identify the growing point position of each seedling on the plug tray.Then the depth differences between the growing point and the upper surface of the plug tray were calculated to obtain plant height.The experiment results show that the nondestructive measurement algorithm proposed in this paper achieves good measurement performance for the watermelon plug seedlings from the 1 true-leaf to 3 true-leaf stages.The average relative error of measurement is 2.33%for the number of true leaves,4.59%for the number of cotyledons,8.37%for the leaf area,and 3.27%for the plant height.The experiment results demonstrate that the proposed algorithm in this paper provides an effective solution for the nondestructive measurement of the canopy phenotype of the plug seedlings.
基金funded by the Beijing Engineering Research Center of Electric Rail Transportation.
文摘Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts.This paper proposes a novel partitioning method based on deep reinforcement learning.First,the partitioning decision process is formulated as a Markov decision process(MDP)model to maximize the modularity.Corresponding key partitioning constraints on parallel restoration are considered.Second,based on the partitioning objective and constraints,the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function.The soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward.Then,the deep Q network method is applied to solve the partitioning MDP model and generate partitioning schemes.Two experience replay buffers are employed to speed up the training process of the method.Finally,case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints,thereby improving the parallelism and reliability of the restoration process.Moreover,simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
文摘Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems.However,the inspection of underwater pipelines presents a challenge due to factors such as light scattering,absorption,restricted visibility,and ambient noise.The advancement of deep learning has introduced powerful techniques for processing large amounts of unstructured and imperfect data collected from underwater environments.This study evaluated the efficacy of the You Only Look Once(YOLO)algorithm,a real-time object detection and localization model based on convolutional neural networks,in identifying and classifying various types of pipeline defects in underwater settings.YOLOv8,the latest evolution in the YOLO family,integrates advanced capabilities,such as anchor-free detection,a cross-stage partial network backbone for efficient feature extraction,and a feature pyramid network+path aggregation network neck for robust multi-scale object detection,which make it particularly well-suited for complex underwater environments.Due to the lack of suitable open-access datasets for underwater pipeline defects,a custom dataset was captured using a remotely operated vehicle in a controlled environment.This application has the following assets available for use.Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed other models in terms of pipe defect detection and classification and achieved a strong balance between precision and recall in identifying pipeline cracks,rust,corners,defective welds,flanges,tapes,and holes.This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
文摘This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities,focusing on recent trends from 2022 to 2025.The primary objective is to evaluate methodological advancements,model performance,dataset usage,and existing challenges in developing clinically robust AI systems.We included peer-reviewed journal articles and highimpact conference papers published between 2022 and 2025,written in English,that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification.Excluded were non-open-access publications,books,and non-English articles.A structured search was conducted across Scopus,Google Scholar,Wiley,and Taylor&Francis,with the last search performed in August 2025.Risk of bias was not formally quantified but considered during full-text screening based on dataset diversity,validation methods,and availability of performance metrics.We used narrative synthesis and tabular benchmarking to compare performance metrics(e.g.,accuracy,Dice score)across model types(CNN,Transformer,Hybrid),imaging modalities,and datasets.A total of 49 studies were included(43 journal articles and 6 conference papers).These studies spanned over 9 public datasets(e.g.,BraTS,Figshare,REMBRANDT,MOLAB)and utilized a range of imaging modalities,predominantly MRI.Hybrid models,especially ResViT and UNetFormer,consistently achieved high performance,with classification accuracy exceeding 98%and segmentation Dice scores above 0.90 across multiple studies.Transformers and hybrid architectures showed increasing adoption post2023.Many studies lacked external validation and were evaluated only on a few benchmark datasets,raising concerns about generalizability and dataset bias.Few studies addressed clinical interpretability or uncertainty quantification.Despite promising results,particularly for hybrid deep learning models,widespread clinical adoption remains limited due to lack of validation,interpretability concerns,and real-world deployment barriers.
基金supported by the National Natural Science Foundation of China[grant number 62376217]the Young Elite Scientists Sponsorship Program by CAST[grant number 2023QNRC001]the Joint Research Project for Meteorological Capacity Improvement[grant number 24NLTSZ003]。
文摘Deep learning-based methods have become alternatives to traditional numerical weather prediction systems,offering faster computation and the ability to utilize large historical datasets.However,the application of deep learning to medium-range regional weather forecasting with limited data remains a significant challenge.In this work,three key solutions are proposed:(1)motivated by the need to improve model performance in data-scarce regional forecasting scenarios,the authors innovatively apply semantic segmentation models,to better capture spatiotemporal features and improve prediction accuracy;(2)recognizing the challenge of overfitting and the inability of traditional noise-based data augmentation methods to effectively enhance model robustness,a novel learnable Gaussian noise mechanism is introduced that allows the model to adaptively optimize perturbations for different locations,ensuring more effective learning;and(3)to address the issue of error accumulation in autoregressive prediction,as well as the challenge of learning difficulty and the lack of intermediate data utilization in one-shot prediction,the authors propose a cascade prediction approach that effectively resolves these problems while significantly improving model forecasting performance.The method achieves a competitive result in The East China Regional AI Medium Range Weather Forecasting Competition.Ablation experiments further validate the effectiveness of each component,highlighting their contributions to enhancing prediction performance.
Abstract: Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls that are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretation. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces the Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
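A compact sketch of the general pattern described here, a ResNet50V2 backbone extended with an additional residual block and a three-way head, is given below. The block sizes, layer counts, and hyperparameters are illustrative assumptions, not the published HCL Net configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_block(x, filters: int):
    # Identity-style shortcut via a 1x1 convolution plus two 3x3 convolutions
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
x = residual_block(backbone.output, 2048)           # one added residual block
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)  # normal / honeycombing / GGO
model = Model(backbone.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```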
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, the limited storage capacity and energy budget of RSUs make it challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Determining reasonable service caching and computation offloading strategies is therefore crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
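One common way to turn such a multi-objective reward vector into the scalar a DQN-family agent trains on is weighted scalarization; the sketch below illustrates only that step, with the paper's RBFN-based weight update summarized by a fixed placeholder vector, and all names and values as illustrative assumptions.

```python
import numpy as np

# Illustrative per-objective rewards for one offloading decision:
# negative delay, negative energy, load-balance score, privacy entropy.
objective_rewards = np.array([-0.42, -0.18, 0.65, 0.91])

# Adaptive weights (in the proposed scheme these are updated by an RBFN
# tracking value changes between objectives; a fixed vector stands in here).
weights = np.array([0.4, 0.2, 0.2, 0.2])

def scalarize(rewards: np.ndarray, w: np.ndarray) -> float:
    """Collapse the reward vector into a single scalar for the TD target."""
    w = w / w.sum()                        # keep weights on the simplex
    return float(np.dot(w, rewards))

r = scalarize(objective_rewards, weights)  # scalar reward for one DDQN update
```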
Funding: Supported by the National Natural Science Foundation of China (NSFC) under grant U23A20310.
Abstract: With the growing advancement of wireless communication technologies, WiFi-based human sensing has gained increasing attention as a non-intrusive and device-free solution. Among the available signal types, Channel State Information (CSI) offers fine-grained temporal, frequency, and spatial insights into multipath propagation, making it a crucial data source for human-centric sensing. Recently, the integration of deep learning has significantly improved the robustness and automation of feature extraction from CSI in complex environments. This paper provides a comprehensive review of deep learning-enhanced human sensing based on CSI. We first outline mainstream CSI acquisition tools and their hardware specifications, then provide a detailed discussion of preprocessing methods such as denoising, time-frequency transformation, data segmentation, and augmentation. Subsequently, we categorize deep learning approaches according to sensing tasks, namely detection, localization, and recognition, and highlight representative models across application scenarios. Finally, we examine key challenges, including domain generalization, multi-user interference, and limited data availability, and propose future research directions involving lightweight model deployment, multimodal data fusion, and semantic-level sensing.
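A minimal sketch of the denoising and time-frequency steps such pipelines typically apply to a CSI amplitude stream is shown below; the sampling rate, cutoff frequency, and STFT parameters are illustrative assumptions, and the synthetic signal stands in for real CSI captures.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

# Toy CSI amplitude stream: 10 s at 100 Hz for one subcarrier.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
csi_amp = 1.0 + 0.2 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)

# Denoising: low-pass Butterworth filter, since human motion
# typically modulates CSI well below the sampling rate.
b, a = butter(N=4, Wn=10.0, btype="low", fs=fs)
csi_denoised = filtfilt(b, a, csi_amp)

# Time-frequency transformation: an STFT spectrogram is a common
# input representation for CNN-based recognition models.
f, seg_t, Zxx = stft(csi_denoised, fs=fs, nperseg=128, noverlap=96)
spectrogram = np.abs(Zxx)                  # shape: (freq_bins, time_frames)
print(spectrogram.shape)
```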
Funding: Financially supported by the National Science and Technology Major Project "Deep Earth Probe and Mineral Resources Exploration" (No. 2024ZD1003701) and the National Key R&D Program of China (No. 2022YFC2905004).
Abstract: An image processing and deep learning method for identifying different types of rock images is proposed. Preprocessing steps, such as rock image acquisition, grayscaling, Gaussian blurring, and feature dimensionality reduction, were conducted to extract useful feature information, and rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) with a PyQt5 interface. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The classification approach was evaluated using image data from the validation and test datasets, and key metrics, such as accuracy, precision, and recall, were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images achieves an accuracy of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to recognize rock samples more efficiently and accurately. Moreover, it can be matched with an intelligent support design system to effectively improve the reliability and economy of the support scheme, and can serve as a reference for the support design of other mining and underground space projects.
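A compact sketch of the preprocessing-plus-CNN pattern described above is given below, using OpenCV for grayscaling and Gaussian blurring and a small TensorFlow CNN; the layer sizes, input resolution, and three-class head are assumptions, not the published configuration.

```python
import cv2
import numpy as np
import tensorflow as tf

def preprocess(path: str, size: int = 128) -> np.ndarray:
    """Grayscale, blur, resize, and normalize one rock image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscaling
    img = cv2.GaussianBlur(img, (5, 5), 0)         # Gaussian blurring
    img = cv2.resize(img, (size, size))
    return img.astype("float32")[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g., 3 lithology classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.predict(...) then yields per-class probabilities per image,
# matching the probabilistic lithology assignment described above.
```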
Abstract: With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a powerful tool for analysing multi-omics data due to its ability to handle complex, non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are applied in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in disease classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions in combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
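To make the autoencoder use case concrete, here is a minimal sketch of compressing a high-dimensional omics profile into a low-dimensional latent code for downstream clustering or classification; the feature and latent dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class OmicsAutoencoder(nn.Module):
    """Encode ~20k expression features to a 64-d latent code and back."""
    def __init__(self, n_features: int = 20000, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)                       # latent embedding per sample
        return self.decoder(z), z

model = OmicsAutoencoder()
x = torch.randn(16, 20000)                        # 16 samples of expression data
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)           # reconstruction objective
```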
Funding: Funded by the Ongoing Research Funding Program (Project number ORF-2025-648), King Saud University, Riyadh, Saudi Arabia.
Abstract: Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the Personal Key Indicators of Heart Disease dataset, the localized random affine shadow-sampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Robustness is further validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering insight into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
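A rough sketch of the residual pattern behind "hierarchical residual transformations" on tabular clinical features is shown below; the width, depth, normalization choices, and the 17-feature input are assumptions, not the published DeepResNet architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two linear layers with batch norm, wrapped in a skip connection."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim),
        )

    def forward(self, x):
        return torch.relu(x + self.net(x))   # residual (skip) connection

model = nn.Sequential(
    nn.Linear(17, 128),                      # e.g., 17 encoded key indicators
    ResidualBlock(128),
    ResidualBlock(128),
    nn.Linear(128, 1),                       # binary heart-disease logit
)
logits = model(torch.randn(32, 17))
```

The skip connection lets each block learn a correction to its input rather than a full transformation, which typically eases optimization as blocks are stacked.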
Funding: Supported by the National Science and Technology Council, Republic of China, under grant NSTC 113-2221-E-194-011-MY3, and by the Research Center on Artificial Intelligence and Sustainability, National Chung Cheng University, under the research project grant titled "Generative Digital Twin System Design for Sustainable Smart City Development in Taiwan".
Abstract: Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, error-prone, and inconsistent, emphasizing the need for a reliable, automated inspection system. Leveraging both object detection and image segmentation, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset of 3300 annotated RGB-D images was generated. Candidate DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall balance, inference latency (target ≥30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's higher latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (recall, accuracy, F1-score, and precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was deployed in a real-time assembly-line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. In addition to detection and segmentation, the application precisely measures socket dimensions by applying edge detection to the YOLOv11 segmentation masks, enabling specification-level quality control directly on the assembly line and improving real-time inspection capability. The implementation is a significant step toward intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification.
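A hedged sketch of the mask-to-measurement step follows, using the Ultralytics YOLO API for segmentation and OpenCV contour geometry; the weights file name, input image, and pixels-per-millimeter calibration value are illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np
from ultralytics import YOLO

PX_PER_MM = 4.2                        # from camera calibration (assumed value)

model = YOLO("yolo11n-seg.pt")         # pretrained segmentation weights (assumed)
results = model("toolkit_frame.jpg")   # one captured assembly-line frame

for mask in results[0].masks.data:     # one binary mask per detected tool
    m = (mask.cpu().numpy() * 255).astype(np.uint8)
    contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)         # largest region in the mask
    (_, _), (w_px, h_px), _ = cv2.minAreaRect(c)   # tightest rotated bounding box
    print(f"tool ~ {w_px / PX_PER_MM:.1f} x {h_px / PX_PER_MM:.1f} mm")
```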
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical and challenging threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data, owing to scalability constraints, high computational costs, and the time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the model's generalization and effectiveness in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
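To illustrate the decentralized training idea, here is a minimal FedAvg-style sketch in which each client trains a small DNN locally on its own traffic data and only model weights are averaged on the server; the model size, feature count, and client count are illustrative assumptions, not the paper's configuration.

```python
import copy
import torch
import torch.nn as nn

def make_model():
    # Tiny intrusion-detection classifier: 40 flow features -> attack/benign
    return nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 2))

def local_update(model, data, target, epochs=1, lr=1e-3):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(data), target)
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    """Server step: element-wise average of the clients' weight tensors."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = make_model()
clients = [(torch.randn(64, 40), torch.randint(0, 2, (64,))) for _ in range(5)]
for _ in range(3):                             # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fedavg(states))
```

Only weight tensors cross the network in this scheme; raw traffic never leaves a client, which is what keeps the training decentralized.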