Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, error-prone, and inconsistent, emphasizing the need for a reliable, automated inspection system. Leveraging both object detection and image segmentation, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset of 3300 annotated RGB-D images was generated. Candidate DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall balance, inference latency (target ≥ 30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's higher latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (Recall, Accuracy, F1-score, and Precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. In addition to detection and segmentation, the application precisely measures socket dimensions by applying edge detection techniques to YOLOv11 segmentation masks. This enables specification-level quality control directly on the assembly line and improves real-time inspection capability. The implementation represents a significant step toward intelligent manufacturing within the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification.
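The dimensional-verification idea can be sketched in a few lines: given a binary segmentation mask and a pixel-to-millimetre calibration factor, measure the masked region. This is a minimal, hypothetical illustration in plain Python; the deployed application uses edge detection on YOLOv11 masks and real camera calibration, and the bounding-box approach and `mm_per_px` factor here are assumptions, not the paper's exact method:

```python
def measure_socket_mm(mask, mm_per_px):
    """Estimate width/height (mm) of the foreground region in a binary mask.

    Hypothetical helper: a bounding-box measurement stands in for the
    paper's edge-detection-based dimensioning."""
    ys = [r for r, row in enumerate(mask) if any(row)]
    xs = [c for row in mask for c, v in enumerate(row) if v]
    if not ys:  # empty mask: nothing detected
        return None
    width_px = max(xs) - min(xs) + 1
    height_px = max(ys) - min(ys) + 1
    return width_px * mm_per_px, height_px * mm_per_px

# Toy 4x5 mask with a 3x2-pixel socket region
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(measure_socket_mm(mask, mm_per_px=0.5))  # -> (1.5, 1.0)
```

In practice the mask would come from the model's segmentation output and the calibration factor from the known camera geometry of the top-down rig.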
This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
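The Dice score the review benchmarks against is a simple overlap statistic between a predicted and a reference segmentation; a minimal sketch over flattened binary masks (toy values, plain Python):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2*|intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # two empty masks agree perfectly

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 3))  # -> 0.667
```

A Dice score above 0.90, as reported for the hybrid models, means predicted and ground-truth tumor regions overlap almost completely.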
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the Personal Key Indicators of Heart Disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
With the growing advancement of wireless communication technologies, WiFi-based human sensing has gained increasing attention as a non-intrusive and device-free solution. Among the available signal types, Channel State Information (CSI) offers fine-grained temporal, frequency, and spatial insights into multipath propagation, making it a crucial data source for human-centric sensing. Recently, the integration of deep learning has significantly improved the robustness and automation of feature extraction from CSI in complex environments. This paper provides a comprehensive review of deep learning-enhanced human sensing based on CSI. We first outline mainstream CSI acquisition tools and their hardware specifications, then provide a detailed discussion of preprocessing methods such as denoising, time-frequency transformation, data segmentation, and augmentation. Subsequently, we categorize deep learning approaches according to sensing tasks, namely detection, localization, and recognition, and highlight representative models across application scenarios. Finally, we examine key challenges including domain generalization, multi-user interference, and limited data availability, and propose future research directions involving lightweight model deployment, multimodal data fusion, and semantic-level sensing.
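The data-segmentation step in such preprocessing pipelines is commonly a sliding window over the CSI time series, turning a continuous stream into fixed-length samples for a deep model. A minimal sketch (the window and step sizes, and the toy amplitude values, are illustrative assumptions):

```python
def sliding_windows(signal, win, step):
    """Split a 1-D CSI amplitude series into overlapping fixed-length windows."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

# Toy CSI amplitude stream; win=4, step=2 gives 50% overlap between samples
csi = [0.1, 0.4, 0.3, 0.9, 0.7, 0.2, 0.5, 0.8]
print(sliding_windows(csi, win=4, step=2))  # 3 overlapping windows of length 4
```

Overlapping windows also serve as a cheap form of data augmentation, which matters given the limited-data challenge the review highlights.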
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer's diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnoses for every AD stage. Confusion matrix analysis shows that the model clearly separates AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer's diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
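The MCC used for evaluation is computed directly from binary confusion counts and, unlike accuracy, stays informative under class imbalance; a minimal sketch (the counts below are illustrative, not the paper's):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from binary confusion counts.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # degenerate confusion matrix -> 0

print(round(mcc(tp=90, tn=85, fp=15, fn=10), 4))  # ≈ 0.7509
```

For multi-class AD staging the same idea generalizes via the multi-class MCC over the full confusion matrix.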
The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints, high computational costs, and the costly, time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
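The decentralized-training idea at the heart of such FL frameworks is typically federated averaging: each client trains locally, and the server averages parameters weighted by local dataset size. A minimal sketch of the aggregation step only (the paper does not specify its aggregation rule, so standard FedAvg is assumed here; the two-parameter "models" are toys):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two IoT clients; the second holds 3x more data, so it dominates the average
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(fed_avg(clients, sizes))  # -> [2.5, 3.5]
```

Only these aggregated parameters travel over the network, which is what spares resource-constrained IoT nodes from sharing raw traffic data.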
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions across multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
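The defining trick of the DDQN agents mentioned above is how the bootstrap target is formed: the online network selects the best next action, while the separate target network evaluates it, reducing the overestimation bias of vanilla DQN. A minimal sketch of that target computation (Q-values are toy lists, not network outputs):

```python
def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN target: r + gamma * Q_target(s', argmax_a Q_online(s', a)).
    The online net picks the action; the target net values it."""
    if done:  # terminal transition: no bootstrapping
        return reward
    best = q_online_next.index(max(q_online_next))
    return reward + gamma * q_target_next[best]

# Online net prefers action 1, so the target net's value for action 1 is used
print(ddqn_target(1.0, 0.9, q_online_next=[0.2, 0.8],
                  q_target_next=[0.5, 0.6], done=False))  # ≈ 1.54
```

In the multi-objective setting described here, each agent would form such a target per objective under its own reward function, with the RBFN-learned weights combining the objectives.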
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of the multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
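A multi-objective reward of the kind described, combining path length, slope, smoothness, and obstacle avoidance, is usually a weighted sum of per-step penalties. The sketch below is a hypothetical shape of such a function; the weights, the uphill-only slope penalty, and the collision penalty are illustrative assumptions, not the paper's actual formulation:

```python
def energy_reward(step_len, slope, turn_angle, hit_obstacle,
                  w_len=1.0, w_slope=2.0, w_smooth=0.5, obstacle_penalty=10.0):
    """Hypothetical energy-centered reward: penalise distance travelled,
    uphill slope (downhill costs nothing here), sharp turns, and collisions."""
    r = -(w_len * step_len + w_slope * max(slope, 0.0) + w_smooth * abs(turn_angle))
    if hit_obstacle:
        r -= obstacle_penalty
    return r

# One metre forward, 20% uphill grade, moderate turn, no collision
print(energy_reward(step_len=1.0, slope=0.2, turn_angle=0.5, hit_obstacle=False))
```

Because the agent maximizes cumulative reward, trajectories that are short, flat, smooth, and collision-free score highest, which is exactly the low-energy behaviour the reward is meant to induce.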
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, designed to process multimodal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value than those of the other approaches. Moreover, the multimodal model outperformed its single-modality variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Accurate channel state information (CSI) is crucial for 6G wireless communication systems to accommodate the growing demands of mobile broadband services. In massive multiple-input multiple-output (MIMO) systems, traditional CSI feedback approaches face challenges such as performance degradation due to feedback delay and channel aging caused by user mobility. To address these issues, we propose a novel spatio-temporal predictive network (STPNet) that jointly integrates CSI feedback and prediction modules. STPNet employs stacked Inception modules to learn the spatial correlation and temporal evolution of CSI, which captures both the local and the global spatio-temporal features. In addition, the signal-to-noise ratio (SNR) adaptive module is designed to adapt flexibly to diverse feedback channel conditions. Simulation results demonstrate that STPNet outperforms existing channel prediction methods under various channel conditions.
Current deep learning models for braced excavation cannot predict deformation from the beginning of excavation because they require a substantial corpus of historical data for training. To address this issue, this study proposes a transfer learning model based on a sequence-to-sequence two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D). The model can use existing data from other adjacent, similar excavations to predict wall deflection once a limited amount of monitoring data from the target excavation has been recorded. In the absence of adjacent excavation data, numerical simulation data from the target project can be employed instead. A weight update strategy is proposed to improve prediction accuracy by integrating stochastic gradient masking with an early stopping mechanism. To illustrate the proposed methodology, an excavation project in Hangzhou, China is adopted. The proposed deep transfer learning model, which uses either adjacent excavation data or numerical simulation data as the source domain, shows a significant improvement in performance compared to the non-transfer learning model. Using simulation data from the target project even leads to better prediction performance than using actual monitoring data from other adjacent excavations. The results demonstrate that the proposed model can reasonably predict deformation with limited data from the target project.
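The early stopping half of the weight update strategy is a standard patience-based mechanism: halt fine-tuning when validation loss stops improving. A minimal generic sketch (the stochastic gradient masking component is paper-specific and omitted here; the patience value and loss sequence are illustrative):

```python
class EarlyStopper:
    """Stop training once validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopper(patience=2)
for epoch, loss in enumerate([0.9, 0.7, 0.71, 0.72]):
    if stopper.step(loss):
        print("stopping at epoch", epoch)  # stops once loss plateaus for 2 epochs
        break
```

In a transfer-learning setting like this one, stopping early on the small target-domain dataset is what keeps the fine-tuned weights from drifting away from the useful source-domain features.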
The negative logarithm of the acid dissociation constant (pKa) significantly influences the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of molecules and is a crucial indicator in drug research. Given the speed and accuracy of computational methods, their role in predicting drug properties is increasingly important. Although many pKa prediction models currently exist, they often focus on enhancing model precision while neglecting interpretability. In this study, we present GraFpKa, a pKa prediction model using graph neural networks (GNNs) and molecular fingerprints. The results show that our acidic and basic models achieved mean absolute errors (MAEs) of 0.621 and 0.402, respectively, on the test set, demonstrating good predictive performance. Notably, to improve interpretability, GraFpKa also incorporates Integrated Gradients (IGs), providing a clearer visual description of the atoms that significantly affect the pKa values. The high reliability and interpretability of GraFpKa ensure accurate pKa predictions while also facilitating a deeper understanding of the relationship between molecular structure and pKa values, making it a valuable tool in the field of pKa prediction.
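The reported MAEs have a direct reading: on average, predicted pKa values deviate from experiment by about 0.6 and 0.4 log units for the acidic and basic models. The metric itself is a one-liner (the prediction/reference values below are made up for illustration):

```python
def mae(pred, true):
    """Mean absolute error between predicted and reference pKa values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# Toy predicted vs. experimental pKa values for three molecules
print(round(mae([7.4, 4.2, 9.1], [7.0, 4.5, 9.0]), 3))  # ≈ 0.267
```

Because pKa is logarithmic, an MAE of 0.4 corresponds to the dissociation constant itself being off by roughly a factor of 10^0.4 ≈ 2.5 on average.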
Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review, based on the PRISMA 2020 approach, of object detection techniques in agriculture, exploring the evolution of methods and applications over the past three years and highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced real-time efficacy. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9 and v10, EfficientDet, Transformer-based models, and hybrid frameworks that improve precision, accuracy, and scalability for crop monitoring and disease detection. The review also covers benchmark datasets and evaluation metrics. It addresses limitations, such as domain adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues, such as multimodal learning, explainable AI, and federated learning. Furthermore, the main aim of this paper is to serve as a thorough resource guide for scientists, researchers, and stakeholders implementing deep learning-based object detection methods for the development of intelligent, robust, and sustainable agricultural systems.
Biomedical big data, characterized by its massive scale, multi-dimensionality, and heterogeneity, offers novel perspectives for disease research, elucidates biological principles, and simultaneously prompts changes in related research methodologies. Biomedical ontology, as a shared formal conceptual system, not only offers standardized terms for multi-source biomedical data but also provides a solid data foundation and framework for biomedical research. In this review, we summarize enrichment analysis and deep learning for biomedical ontology based on its structure and semantic annotation properties, highlighting how technological advancements are enabling more comprehensive use of ontology information. Enrichment analysis represents an important application of ontology to elucidate the potential biological significance of a particular molecular list. Deep learning, on the other hand, represents an increasingly powerful analytical tool that can be widely combined with ontology for analysis and prediction. With the continuous evolution of big data technologies, the integration of these technologies with biomedical ontologies is opening up exciting new possibilities for advancing biomedical research.
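Classical over-representation enrichment analysis boils down to a one-sided hypergeometric test: given a molecular list, is an ontology term's annotation set hit more often than chance would predict? A minimal sketch using exact binomial coefficients (the gene counts below are illustrative; production tools also apply multiple-testing correction, which is omitted here):

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N population, K annotated, n drawn):
    the over-representation test behind GO-style term enrichment."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 20-gene list with 5 hits in a 50-gene term, out of a 1000-gene background:
# expected hits by chance = 20 * 50 / 1000 = 1, so 5 hits is a strong signal
p = enrichment_p(N=1000, K=50, n=20, k=5)
print(p)  # small p -> the term is over-represented in the list
```

Repeating this test across every ontology term, then correcting for multiple testing, yields the familiar ranked list of enriched biological processes.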
To overcome the limitations of low efficiency and reliance on manual processes in the measurement of geometric parameters for bridge prefabricated components, a method based on deep learning and computer vision is developed to identify the geometric parameters. The study utilizes a common precast element for highway bridges as the research subject. First, edge feature points of the bridge component section are extracted from images of the precast component cross-sections by combining the Canny operator with mathematical morphology. Subsequently, a deep learning model is developed to identify the geometric parameters of the precast components, using the extracted edge coordinates from the images as input and the predefined control parameters of the bridge section as output. A dataset is generated by varying the control parameters and noise levels for model training. Finally, field measurements are conducted to validate the accuracy of the developed method. The results indicate that the developed method effectively identifies the geometric parameters of bridge precast components, with an error rate maintained within 5%.
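The mathematical-morphology step that typically follows Canny edge detection can be illustrated with a tiny pure-Python morphological closing (dilation then erosion) that bridges small breaks in an edge map. The 3x3 structuring element and border handling here are illustrative assumptions, not the paper's exact operator; note that this border-as-background erosion also trims line endpoints:

```python
def dilate(img):
    """3x3 binary dilation: a pixel becomes 1 if any 8-neighbour (or itself) is 1."""
    h, w = len(img), len(img[0])
    return [[int(any(img[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if 0 <= r + dr < h and 0 <= c + dc < w))
             for c in range(w)] for r in range(h)]

def erode(img):
    """3x3 binary erosion; out-of-bounds neighbours count as background."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= r + dr < h and 0 <= c + dc < w and img[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
             for c in range(w)] for r in range(h)]

def close_gaps(img):
    """Morphological closing (dilate, then erode) to bridge small edge breaks."""
    return erode(dilate(img))

# A horizontal edge with a one-pixel break at column 2
edge = [[0, 0, 0, 0, 0],
        [1, 1, 0, 1, 1],
        [0, 0, 0, 0, 0]]
print(close_gaps(edge))  # middle row -> [0, 1, 1, 1, 0]: the gap is bridged
```

Cleaning broken contours this way is what makes the subsequent edge-coordinate extraction stable enough to feed the parameter-identification model.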
Finding materials with specific properties is a hot topic in materials science. Traditional materials design relies on empirical and trial-and-error methods, requiring extensive experiments and time and resulting in high costs. With the development of physics, statistics, computer science, and other fields, machine learning offers opportunities for systematically discovering new materials. In particular, through machine learning-based inverse design, machine learning algorithms analyze the mapping relationships between materials and their properties to find materials with desired properties. This paper first outlines the basic concepts of materials inverse design and the challenges faced by machine learning-based approaches to materials inverse design. Then, three main inverse design methods, exploration-based, model-based, and optimization-based, are analyzed in the context of different application scenarios. Finally, the applications of inverse design methods in alloys, optical materials, and acoustic materials are elaborated, and the prospects for materials inverse design are discussed. The authors hope to accelerate the discovery of new materials and provide new possibilities for advancing materials science and innovative design methods.
Topographic maps, as essential tools and sources of information for geographic research, contain precise spatial locations and rich map features, and they illustrate spatio-temporal information on the distribution and differences of various surface features. Currently, topographic maps are mainly stored in raster and vector formats. Extraction of the spatio-temporal knowledge in the maps, such as spatial distribution patterns, feature relationships, and dynamic evolution, still primarily relies on manual interpretation. However, manual interpretation is time-consuming and laborious, especially for large-scale, long-term map knowledge extraction and application. With the development of artificial intelligence technology, it is possible to improve the automation level of map knowledge interpretation. Therefore, the present study proposes an automatic interpretation method for raster topographic map knowledge based on deep learning. To address the limitations of current data-driven intelligent technology in learning map spatial relations and cognitive logic, we establish a formal description of map knowledge by mapping the relationship between map knowledge and features, thereby ensuring interpretation accuracy. Subsequently, deep learning techniques are employed to extract map features automatically, and the spatio-temporal knowledge is constructed by combining formal descriptions of geographic feature knowledge. Validation experiments demonstrate that the proposed method effectively achieves automatic interpretation of spatio-temporal knowledge of geographic features in maps, with an accuracy exceeding 80%. The findings of the present study contribute to machine understanding of spatio-temporal differences in map knowledge and advance the intelligent interpretation and utilization of cartographic information.
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical implications for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede not only performance and generalizability but also interpretability, which is crucial for clinical trust and use. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation in federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and promising directions for further research.
Abstract: Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, error-prone, and inconsistent, emphasizing the need for a reliable, automated inspection system. Leveraging both object detection and image segmentation, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset of 3300 annotated RGB-D images was generated. Candidate DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall balance, inference latency (target ≥ 30 FPS), and computational cost, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's higher latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (Recall, Accuracy, F1-score, and Precision). YOLOv11 demonstrated the best balance, with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, at an average inference time of 25 ms per frame (≈40 FPS), confirming real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly-line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. Beyond detection and segmentation, the application precisely measures socket dimensions by applying edge detection techniques to the YOLOv11 segmentation masks, enabling specification-level quality control directly on the assembly line and strengthening real-time inspection capability. The implementation is a significant step toward intelligent manufacturing under the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification.
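The socket-measurement step described above can be illustrated with a minimal, hypothetical sketch: given a binarized segmentation mask (e.g., thresholded from a YOLOv11 mask output) and a camera calibration factor, the axis-aligned extent of the tool is converted from pixels to millimetres. The mask format and `mm_per_px` calibration factor are assumptions for illustration, not the deployed application's code.

```python
def mask_dimensions_mm(mask, mm_per_px):
    """Estimate the width and height of a tool from a binary segmentation mask.

    mask: 2D list of 0/1 values (hypothetical thresholded mask output).
    mm_per_px: calibration factor implied by the fixed top-down camera geometry.
    Returns (width_mm, height_mm), or None if the mask is empty.
    """
    rows = [r for r, row in enumerate(mask) if any(row)]          # rows containing the object
    cols = [c for row in mask for c, v in enumerate(row) if v]    # columns containing the object
    if not rows:
        return None
    height_px = max(rows) - min(rows) + 1
    width_px = max(cols) - min(cols) + 1
    return width_px * mm_per_px, height_px * mm_per_px
```

For a real deployment, the axis-aligned extent would be replaced by a rotated bounding box or contour fit on the mask edges, but the pixel-to-millimetre conversion step is the same.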
Abstract: This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and the remaining challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Non-open-access publications, books, and non-English articles were excluded. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned more than 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption after 2023. Many studies lacked external validation and were evaluated on only a few benchmark datasets, raising concerns about generalizability and dataset bias, and few addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited by the lack of validation, interpretability concerns, and real-world deployment barriers.
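For reference, the Dice score used to benchmark segmentation in the reviewed studies is the overlap measure 2|A∩B| / (|A| + |B|) between predicted and ground-truth masks. A small self-contained sketch over sets of pixel indices:

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as collections of pixel indices.

    Returns 1.0 for two empty masks (perfect agreement by convention) and 0.0
    when the masks share no pixels.
    """
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```

A Dice score above 0.90, as reported for the hybrid models, means the predicted and reference tumor regions overlap in roughly nine-tenths of their combined extent.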
Abstract: Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls that are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretation. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Funding: Funded by the Ongoing Research Funding Program, Project No. ORF-2025-648, King Saud University, Riyadh, Saudi Arabia.
Abstract: Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of the heart disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Grant U23A20310.
Abstract: With the growing advancement of wireless communication technologies, WiFi-based human sensing has gained increasing attention as a non-intrusive and device-free solution. Among the available signal types, Channel State Information (CSI) offers fine-grained temporal, frequency, and spatial insights into multipath propagation, making it a crucial data source for human-centric sensing. Recently, the integration of deep learning has significantly improved the robustness and automation of feature extraction from CSI in complex environments. This paper provides a comprehensive review of deep learning-enhanced human sensing based on CSI. We first outline mainstream CSI acquisition tools and their hardware specifications, then provide a detailed discussion of preprocessing methods such as denoising, time-frequency transformation, data segmentation, and augmentation. Subsequently, we categorize deep learning approaches according to sensing tasks, namely detection, localization, and recognition, and highlight representative models across application scenarios. Finally, we examine key challenges, including domain generalization, multi-user interference, and limited data availability, and propose future research directions involving lightweight model deployment, multimodal data fusion, and semantic-level sensing.
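As one example of the data-segmentation step discussed above, CSI streams are commonly cut into fixed-length overlapping windows before being fed to a network. A minimal sketch (the window and stride values are illustrative, not taken from any surveyed system):

```python
def segment_csi(samples, window, stride):
    """Split a CSI time series into fixed-length, possibly overlapping windows.

    samples: sequence of CSI readings (each reading may itself be a vector
             of per-subcarrier values).
    window:  number of readings per segment.
    stride:  step between consecutive segment starts; stride < window
             yields overlapping segments, a common augmentation trick.
    """
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]
```

With `window=4` and `stride=2`, a 10-reading stream yields four half-overlapping segments, effectively doubling the number of training examples relative to non-overlapping slicing.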
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2025-02-01295.
Abstract: Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer's diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement scheme integrates class-weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, obtaining an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnoses at every AD stage. Confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer's diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
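The MCC used for evaluation above is computed from the binary confusion-matrix counts as (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)), and is well suited to imbalanced data because it rewards agreement on both classes. A minimal sketch (the counts below are hypothetical):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance level) to +1
    (perfect prediction). Returns 0.0 when any marginal is empty, the
    usual convention for a degenerate confusion matrix.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

A perfectly separating classifier (no false positives or negatives) scores exactly 1.0, matching the interpretation of the 0.9614 value reported above as near-perfect agreement.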
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints, high computational costs, and the time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the model's generalization and effectiveness in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
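The decentralized training step can be illustrated with the standard FedAvg aggregation rule, in which a server averages client model weights in proportion to local dataset sizes. The abstract does not specify the study's exact aggregation scheme, so this flat-weight-vector sketch is a generic illustration:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging over flat weight vectors.

    client_weights: list of equal-length weight lists, one per client.
    client_sizes:   number of local training samples per client; larger
                    clients contribute proportionally more to the average.
    Returns the aggregated global weight vector.
    """
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]
```

In a full FL round, the server broadcasts the aggregated vector back to the clients, which resume local training; raw IoT traffic never leaves the devices, which is what sidesteps the labeling and scalability constraints noted above.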
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment, so determining sound service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed: each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Abstract: At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the outset. The multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path-planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability than baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
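A multi-objective, energy-centered reward of the kind described above can be sketched as a weighted penalty over step length, uphill slope, and turning angle, plus a collision penalty. The weights and functional form here are illustrative assumptions, not the paper's actual reward function:

```python
def energy_reward(step_len, slope, turn_angle, collided,
                  w_len=1.0, w_slope=2.0, w_smooth=0.5, penalty=50.0):
    """Hypothetical per-step reward for energy-aware path planning.

    step_len:   distance covered this step (longer paths cost energy).
    slope:      terrain gradient; only uphill motion (slope > 0) is penalized.
    turn_angle: change in heading, penalizing jerky, non-smooth motion.
    collided:   whether the step hit an obstacle (large fixed penalty).
    Returns a negative value; the agent maximizes it by minimizing energy.
    """
    r = -(w_len * step_len + w_slope * max(slope, 0.0) + w_smooth * abs(turn_angle))
    if collided:
        r -= penalty
    return r
```

Because the reward is a fixed scalarization, the relative weights directly encode the trade-off between path length and climb avoidance; tuning them reshapes which trajectories the Dueling-DQN converges to.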
Funding: Supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of Guangdong Provincial Science & Technology Plan Projects, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the predictions of our deep learning model had higher clinical value than those of the other approaches. Moreover, the multimodal model outperformed its single-modality variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Funding: Supported in part by the Natural Science Foundation of China under Grant Nos. U2468201 and 62221001, and by ZTE Industry-University-Institute Cooperation Funds under Grant No. IA20240420002.
Abstract: Accurate channel state information (CSI) is crucial for 6G wireless communication systems to accommodate the growing demands of mobile broadband services. In massive multiple-input multiple-output (MIMO) systems, traditional CSI feedback approaches face challenges such as performance degradation due to feedback delay and channel aging caused by user mobility. To address these issues, we propose a novel spatio-temporal predictive network (STPNet) that jointly integrates CSI feedback and prediction modules. STPNet employs stacked Inception modules to learn the spatial correlation and temporal evolution of CSI, capturing both local and global spatio-temporal features. In addition, a signal-to-noise ratio (SNR) adaptive module is designed to adapt flexibly to diverse feedback channel conditions. Simulation results demonstrate that STPNet outperforms existing channel prediction methods under various channel conditions.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2023YFC3009400) and the National Natural Science Foundation of China (Grant Nos. 42307218 and U2239251).
Abstract: Current deep learning models for braced excavation cannot predict deformation from the beginning of excavation because they require a substantial corpus of historical data for training. To address this issue, this study proposes a transfer learning model based on a sequence-to-sequence two-dimensional (2D) convolutional long short-term memory neural network (S2SCL2D). The model can use existing data from adjacent, similar excavations to predict wall deflection once a limited amount of monitoring data from the target excavation has been recorded. In the absence of adjacent excavation data, numerical simulation data from the target project can be employed instead. A weight update strategy is proposed to improve prediction accuracy by integrating stochastic gradient masking with an early stopping mechanism. To illustrate the proposed methodology, an excavation project in Hangzhou, China is adopted. The proposed deep transfer learning model, which uses either adjacent excavation data or numerical simulation data as the source domain, shows a significant performance improvement over the non-transfer learning model. Using the simulation data from the target project even leads to better prediction performance than using actual monitoring data from other adjacent excavations. The results demonstrate that the proposed model can reasonably predict deformation with limited data from the target project.
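The early stopping half of the weight update strategy can be sketched as follows: fine-tuning on the target excavation halts once the validation loss has failed to improve for a set number of epochs, which guards against overfitting the small target dataset. The `patience` and `min_delta` values are illustrative, and the stochastic gradient-masking half of the strategy is omitted:

```python
class EarlyStopping:
    """Stop fine-tuning when the validation loss stops improving.

    patience:  number of consecutive non-improving epochs tolerated.
    min_delta: minimum loss decrease that counts as an improvement.
    """
    def __init__(self, patience=3, min_delta=0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In the transfer setting described above, the source-domain weights are loaded first and `step` is called after each fine-tuning epoch on the limited target monitoring data.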
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2023YFF1204904), the National Natural Science Foundation of China (Grant Nos. U23A20530 and 82173746), and the Shanghai Frontiers Science Center of Optogenetic Techniques for Cell Metabolism (Shanghai Municipal Education Commission, China).
Abstract: The negative logarithm of the acid dissociation constant (pKa) significantly influences the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of molecules and is a crucial indicator in drug research. Given the speed and accuracy of computational methods, their role in predicting drug properties is increasingly important. Although many pKa prediction models currently exist, they often focus on enhancing precision while neglecting interpretability. In this study, we present GraFpKa, a pKa prediction model using graph neural networks (GNNs) and molecular fingerprints. The results show that our acidic and basic models achieved mean absolute errors (MAEs) of 0.621 and 0.402, respectively, on the test set, demonstrating good predictive performance. Notably, to improve interpretability, GraFpKa also incorporates Integrated Gradients (IGs), providing a clearer visual description of the atoms that significantly affect the pKa value. The high reliability and interpretability of GraFpKa ensure accurate pKa predictions while also facilitating a deeper understanding of the relationship between molecular structure and pKa values, making it a valuable tool in the field of pKa prediction.
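The Integrated Gradients attribution used by GraFpKa is defined as IG_i = (x_i − x'_i) · ∫₀¹ ∂f/∂x_i(x' + α(x − x')) dα, the gradient of the model output integrated along a straight path from a baseline x' to the input x. A model-agnostic numeric sketch using a midpoint Riemann sum and central finite differences; the quadratic test function below is a toy stand-in for the GNN, not GraFpKa itself:

```python
def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    """Numeric Integrated Gradients attribution for a scalar function f.

    Approximates the path integral of each partial derivative from `baseline`
    to `x`, then scales by the input difference. Attributions satisfy the
    completeness property: they sum to f(x) - f(baseline) (up to numeric error).
    """
    n = len(x)
    avg_grads = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps          # midpoint rule along the path
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            hi, lo = point[:], point[:]
            hi[i] += eps
            lo[i] -= eps
            avg_grads[i] += (f(hi) - f(lo)) / (2 * eps) / steps
    return [(x[i] - baseline[i]) * avg_grads[i] for i in range(n)]
```

In GraFpKa's setting, f is the trained pKa predictor, the inputs are atom/fingerprint features, and the per-feature attributions are mapped back onto atoms for visualization.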
Abstract: Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review, based on the PRISMA 2020 approach, of object detection techniques in agriculture, exploring the evolution of methods and applications over the past three years and highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced real-time efficacy. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9 and v10, EfficientDet, Transformer-based models, and hybrid frameworks that improve precision, accuracy, and scalability for crop monitoring and disease detection. The review also covers benchmark datasets and evaluation metrics. It addresses limitations, such as domain adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues, such as multimodal learning, explainable AI, and federated learning. Above all, this paper aims to serve as a thorough resource for scientists, researchers, and stakeholders implementing deep learning-based object detection methods for the development of intelligent, robust, and sustainable agricultural systems.
Funding: Supported by the National Natural Science Foundation of China (61902095).
Abstract: Biomedical big data, characterized by its massive scale, multi-dimensionality, and heterogeneity, offers novel perspectives for disease research, elucidates biological principles, and simultaneously prompts changes in related research methodologies. Biomedical ontology, as a shared formal conceptual system, not only offers standardized terms for multi-source biomedical data but also provides a solid data foundation and framework for biomedical research. In this review, we summarize enrichment analysis and deep learning for biomedical ontology based on its structure and semantic annotation properties, highlighting how technological advancements are enabling more comprehensive use of ontology information. Enrichment analysis represents an important application of ontology for elucidating the potential biological significance of a particular molecular list. Deep learning, on the other hand, represents an increasingly powerful analytical tool that can be widely combined with ontology for analysis and prediction. With the continuous evolution of big data technologies, the integration of these technologies with biomedical ontologies is opening up exciting new possibilities for advancing biomedical research.
Funding: The National Natural Science Foundation of China (Nos. 52338011 and 52378291) and the Young Elite Scientists Sponsorship Program by CAST (No. 2022-2024QNRC0101).
Abstract: To overcome the limitations of low efficiency and reliance on manual processes in measuring the geometric parameters of prefabricated bridge components, a method based on deep learning and computer vision is developed to identify these geometric parameters. The study uses a common precast element for highway bridges as the research subject. First, edge feature points of the bridge component section are extracted from images of the precast component cross-sections by combining the Canny operator with mathematical morphology. Subsequently, a deep learning model is developed to identify the geometric parameters of the precast components, using the extracted edge coordinates from the images as input and the predefined control parameters of the bridge section as output. A dataset is generated by varying the control parameters and noise levels for model training. Finally, field measurements are conducted to validate the accuracy of the developed method. The results indicate that the developed method effectively identifies the geometric parameters of bridge precast components, with an error rate within 5%.
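The morphology side of the edge-extraction step can be made concrete with a morphological gradient (dilation minus erosion over a 3×3 neighborhood) on a binary image, which marks pixels where the region boundary passes. This is a simplified stand-in for the paper's Canny-plus-morphology pipeline, shown only to illustrate the operation:

```python
def morph_gradient_edges(img):
    """Edge map of a binary image via the morphological gradient.

    For each pixel, dilation is the max and erosion the min over its 3x3
    neighborhood (clipped at the image border); their difference is 1
    exactly where the neighborhood straddles a region boundary.
    """
    h, w = len(img), len(img[0])

    def neighborhood(r, c):
        return [img[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, h))
                for cc in range(max(c - 1, 0), min(c + 2, w))]

    return [[max(neighborhood(r, c)) - min(neighborhood(r, c))
             for c in range(w)] for r in range(h)]
```

The resulting edge pixels are the kind of coordinates that, after thinning and ordering, would be fed to the geometric-parameter identification model described above.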
Funding: Funded by the National Natural Science Foundation of China (52061020), Major Science and Technology Projects in Yunnan Province (202302AG050009), and Yunnan Fundamental Research Projects (202301AV070003).
Abstract: Finding materials with specific properties is a hot topic in materials science. Traditional materials design relies on empirical and trial-and-error methods, requiring extensive experiments and time and resulting in high costs. With developments in physics, statistics, computer science, and other fields, machine learning offers opportunities for systematically discovering new materials. In particular, through machine learning-based inverse design, algorithms analyze the mapping relationships between materials and their properties to find materials with desired properties. This paper first outlines the basic concepts of materials inverse design and the challenges faced by machine learning-based approaches to it. Then, three main inverse design methods (exploration-based, model-based, and optimization-based) are analyzed in the context of different application scenarios. Finally, the applications of inverse design methods in alloys, optical materials, and acoustic materials are elaborated, and the prospects for materials inverse design are discussed. The authors hope to accelerate the discovery of new materials and provide new possibilities for advancing materials science and innovative design methods.
Funding: Deep-time Digital Earth (DDE) Big Science Program (No. GJ-C03-SGF-2025-004), National Natural Science Foundation of China (No. 42394063), and Sichuan Science and Technology Program (No. 2025ZNSFSC0325).
Abstract: Topographic maps, as essential tools and sources of information for geographic research, contain precise spatial locations and rich map features, and they illustrate spatio-temporal information on the distribution of and differences between various surface features. Currently, topographic maps are mainly stored in raster and vector formats. Extraction of the spatio-temporal knowledge in the maps, such as spatial distribution patterns, feature relationships, and dynamic evolution, still relies primarily on manual interpretation. However, manual interpretation is time-consuming and laborious, especially for large-scale, long-term map knowledge extraction and application. With the development of artificial intelligence technology, it is now possible to raise the level of automation in map knowledge interpretation. The present study therefore proposes an automatic interpretation method for raster topographic map knowledge based on deep learning. To address the limitations of current data-driven intelligent techniques in learning map spatial relations and cognitive logic, we establish a formal description of map knowledge by mapping the relationship between map knowledge and features, thereby ensuring interpretation accuracy. Subsequently, deep learning techniques are employed to extract map features automatically, and the spatio-temporal knowledge is constructed by combining formal descriptions of geographic feature knowledge. Validation experiments demonstrate that the proposed method effectively achieves automatic interpretation of the spatio-temporal knowledge of geographic features in maps, with an accuracy exceeding 80%. The findings of the present study contribute to machine understanding of spatio-temporal differences in map knowledge and advance the intelligent interpretation and utilization of cartographic information.
Abstract: Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration of and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and exciting directions for further research.
Abstract: The bearing is an indispensable key component of mechanical equipment, and its working state is directly related to the stability and safety of the whole machine. In recent years, the rapid development of artificial intelligence technology, especially breakthroughs in deep learning, has provided new ideas for bearing fault diagnosis. Deep learning can automatically learn features from large amounts of data, has strong nonlinear modeling ability, and can effectively address the shortcomings of traditional methods. Targeting the key problems in bearing fault diagnosis, this paper studies fault diagnosis methods based on deep learning, which not only provides a new solution for bearing fault diagnosis but also serves as a reference for applying deep learning in other mechanical fault diagnosis fields.