Deep neural networks(DNNs)are effective in solving both forward and inverse problems for nonlinear partial differential equations(PDEs).However,conventional DNNs are not effective in handling problems such as delay di...Deep neural networks(DNNs)are effective in solving both forward and inverse problems for nonlinear partial differential equations(PDEs).However,conventional DNNs are not effective in handling problems such as delay differential equations(DDEs)and delay integrodifferential equations(DIDEs)with constant delays,primarily due to their low regularity at delayinduced breaking points.In this paper,a DNN method that combines multi-task learning(MTL)which is proposed to solve both the forward and inverse problems of DIDEs.The core idea of this approach is to divide the original equation into multiple tasks based on the delay,using auxiliary outputs to represent the integral terms,followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function.Furthermore,given the increased training dificulty associated with multiple tasks and outputs,we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks.This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs,as demonstrated by comparisons with traditional DNN methods.We validate the effectiveness of this method through several numerical experiments,test various parameter sharing structures in MTL and compare the testing results of these structures.Finally,this method is implemented to solve the inverse problem of nonlinear DIDE and the results show that the unknown parameters of DIDE can be discovered with sparse or noisy data.展开更多
Metabolomics covers a wide range of applications in life sciences,biomedicine,and phytology.Data acquisition(to achieve high coverage and efficiency)and analysis(to pursue good classification)are two key segments invo...Metabolomics covers a wide range of applications in life sciences,biomedicine,and phytology.Data acquisition(to achieve high coverage and efficiency)and analysis(to pursue good classification)are two key segments involved in metabolomics workflows.Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups.However,insufficient feature extraction,inappropriate feature selection,overfitting,or underfitting lead to an insufficient capacity to discriminate plants that are often easily confused.Using two ginseng varieties,namely Panax japonicus(PJ)and Panax japonicus var.major(PJvm),containing the similar ginsenosides,we integrated pseudo-targeted metabolomics and deep neural network(DNN)modeling to achieve accurate species differentiation.A pseudo-targeted metabolomics approach was optimized through data acquisition mode,ion pairs generation,comparison between multiple reaction monitoring(MRM)and scheduled MRM(sMRM),and chromatographic elution gradient.In total,1980 ion pairs were monitored within 23 min,allowing for the most comprehensive ginseng metabolome analysis.The established DNN model demonstrated excellent classification performance(in terms of accuracy,precision,recall,F1 score,area under the curve,and receiver operating characteristic(ROC))using the entire metabolome data and feature-selection dataset,exhibiting superior advantages over random forest(RF),support vector machine(SVM),extreme gradient boosting(XGBoost),and multilayer perceptron(MLP).Moreover,DNNs were advantageous for automated feature learning,nonlinear modeling,adaptability,and generalization.This study confirmed practicality of the established strategy for efficient metabolomics data analysis and reliable classification performance even when using small-volume samples.This established approach holds promise for plant metabolomics and is not limited to ginseng.展开更多
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attentio...Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared with Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represented an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results showed that this DRes-CNN-based imputation method outperforms current state-of-the-art models. These results established DRes-CNN as a reliable solution for addressing missing data.展开更多
Electric vehicles(EVs)are gradually being deployed in the transportation sector.Although they have a high impact on reducing greenhouse gas emissions,their penetration is challenged by their random energy demand and d...Electric vehicles(EVs)are gradually being deployed in the transportation sector.Although they have a high impact on reducing greenhouse gas emissions,their penetration is challenged by their random energy demand and difficult scheduling of their optimal charging.To cope with these problems,this paper presents a novel approach for photovoltaic grid-connected microgrid EV charging station energy demand forecasting.The present study is part of a comprehensive framework involving emerging technologies such as drones and artificial intelligence designed to support the EVs’charging scheduling task.By using predictive algorithms for solar generation and load demand estimation,this approach aimed at ensuring dynamic and efficient energy flow between the solar energy source,the grid and the electric vehicles.The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records.Therefore,various forecasters based on Long Short-term Memory,Gated Recurrent Unit,and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University(Italy).The developed forecasters have been evaluated and compared according to different metrics,including R,RMSE,MAE,and MAPE.We found that the obtained R values for both PV power generation and energy demand ranged between 97%and 98%.These study findings can be used for reliable and efficient decision-making on the management side of the optimal scheduling of the charging operations.展开更多
With the increasing importance of supply chain transparency,blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks.This study extends the mathematical model a...With the increasing importance of supply chain transparency,blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks.This study extends the mathematical model and proof of‘the Overall Performance Characteristics of the Supply Chain’to encompass multiple variables within blockchain data.Utilizing graph theory,the model is further developed into a single-layer neural network,which serves as the foundation for constructing two multi-layer deep learning neural network models,Feedforward Neural Network(abbreviated as FNN)and Deep Clustering Network(abbreviated as DCN).Furthermore,this study retrieves corporate data from the Chunghwa Yellow Pages online resource and Taiwan Economic Journal database(abbreviated as TEJ).These data are then virtualized using‘the Metaverse Algorithm’,and the selected virtualized blockchain variables are utilized to train a neural network model for classification.The results demonstrate that a single-layer neural network model,leveraging blockchain data and employing the Proof of Relation algorithm(abbreviated as PoR)as the activation function,effectively identifies anomalous enterprises,which constitute 7.2%of the total sample,aligning with expectations.In contrast,the multi-layer neural network models,DCN and FNN,classify an excessively large proportion of enterprises as anomalous(ranging from one-fourth to one-third),which deviates from expectations.This indicates that deep learning may still be inadequate in effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data.In other words,procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence(abbreviated as AI).展开更多
Fundoscopic diagnosis involves assessing the proper functioning of the eye’s nerves,blood vessels,retinal health,and the impact of diabetes on the optic nerves.Fundus disorders are a major global health concern,affec...Fundoscopic diagnosis involves assessing the proper functioning of the eye’s nerves,blood vessels,retinal health,and the impact of diabetes on the optic nerves.Fundus disorders are a major global health concern,affecting millions of people worldwide due to their widespread occurrence.Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy.As a result,accurate fundus detection is essential for early diagnosis and effective treatment,helping to prevent severe complications and improve patient outcomes.To address this need,this article introduces a Derivative Model for Fundus Detection using Deep NeuralNetworks(DMFD-DNN)to enhance diagnostic precision.Thismethod selects key features for fundus detection using the least derivative,which identifies features correlating with stored fundus images.Feature filtering relies on the minimum derivative,determined by extracting both similar and varying textures.In this research,the DNN model was integrated with the derivative model.Fundus images were segmented,features were extracted,and the DNN was iteratively trained to identify fundus regions reliably.The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally,taking into account the least possible derivative across iterations,and using outputs from previous cycles.The hidden layer of the neural network operates on the most significant derivative,which may reduce precision across iterations.These derivatives are treated as inaccurate,and the model is subsequently trained using selective features and their corresponding extractions.The proposed model outperforms previous techniques in detecting fundus regions,achieving 94.98%accuracy and 91.57%sensitivity,with a minimal error rate of 5.43%.It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead,thereby improving operational efficiency and scalability.Ultimately,the proposed model enhances diagnostic precision and reduces errors,leading to more effective fundus dysfunction diagnosis and treatment.展开更多
To enhance the denoising performance of event-based sensors,we introduce a clustering-based temporal deep neural network denoising method(CBTDNN).Firstly,to cluster the sensor output data and obtain the respective clu...To enhance the denoising performance of event-based sensors,we introduce a clustering-based temporal deep neural network denoising method(CBTDNN).Firstly,to cluster the sensor output data and obtain the respective cluster centers,a combination of density-based spatial clustering of applications with noise(DBSCAN)and Kmeans++is utilized.Subsequently,long short-term memory(LSTM)is employed to fit and yield optimized cluster centers with temporal information.Lastly,based on the new cluster centers and denoising ratio,a radius threshold is set,and noise points beyond this threshold are removed.The comprehensive denoising metrics F1_score of CBTDNN have achieved 0.8931,0.7735,and 0.9215 on the traffic sequences dataset,pedestrian detection dataset,and turntable dataset,respectively.And these metrics demonstrate improvements of 49.90%,33.07%,19.31%,and 22.97%compared to four contrastive algorithms,namely nearest neighbor(NNb),nearest neighbor with polarity(NNp),Autoencoder,and multilayer perceptron denoising filter(MLPF).These results demonstrate that the proposed method enhances the denoising performance of event-based sensors.展开更多
Deep neural networks(DNNs)have found extensive applications in safety-critical artificial intelligence systems,such as autonomous driving and facial recognition systems.However,recent research has revealed their susce...Deep neural networks(DNNs)have found extensive applications in safety-critical artificial intelligence systems,such as autonomous driving and facial recognition systems.However,recent research has revealed their susceptibility to backdoors maliciously injected by adversaries.This vulnerability arises due to the intricate architecture and opacity of DNNs,resulting in numerous redundant neurons embedded within the models.Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs,thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications.This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them.Initially,we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs,highlighting the feasibility and practicality of generating backdoor attacks against DNNs.Subsequently,we provide an overview of notable works encompassing various attack and defense strategies,facilitating a comparative analysis of their approaches.Through these discussions,we offer constructive insights aimed at refining these techniques.Finally,we extend our research perspective to the domain of large language models(LLMs)and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs.Through a systematic review of existing studies on backdoor vulnerabilities in LLMs,we identify critical open challenges in this field and propose actionable directions for future research.展开更多
With the advancement of Vehicle-to-Everything(V2X)technology,efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance.Existing methods suffer from h...With the advancement of Vehicle-to-Everything(V2X)technology,efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance.Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions.To address these challenges,this study presents an innovative framework that combines Graph Neural Networks(GNNs)with a Double Deep Q-Network(DDQN),utilizing dynamic graph structures and reinforcement learning.An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology,thereby improving decision accuracy and efficiency.Meanwhile,the framework models communication links as nodes and interference relationships as edges,effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information.Employing an aggregation mechanism based on the Graph Attention Network(GAT),it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance,ensuring more efficient and adaptive resource management.This design ensures reliable Vehicle-to-Vehicle(V2V)communication while maintaining high Vehicle-to-Infrastructure(V2I)throughput.The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment,allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions.Experimental results demonstrate that the proposed method significantly reduces computational overhead,mitigates latency,and improves resource utilization efficiency in vehicular networks under complex traffic scenarios.This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems,offering substantial theoretical significance and practical value.展开更多
The categorization of brain tumors is a significant issue for healthcare applications.Perfect and timely identification of brain tumors is important for employing an effective treatment of this disease.Brain tumors po...The categorization of brain tumors is a significant issue for healthcare applications.Perfect and timely identification of brain tumors is important for employing an effective treatment of this disease.Brain tumors possess high changes in terms of size,shape,and amount,and hence the classification process acts as a more difficult research problem.This paper suggests a deep learning model using the magnetic resonance imaging technique that overcomes the limitations associated with the existing classification methods.The effectiveness of the suggested method depends on the coyote optimization algorithm,also known as the LOBO algorithm,which optimizes the weights of the deep-convolutional neural network classifier.The accuracy,sensitivity,and specificity indices,which are obtained to be 92.40%,94.15%,and 91.92%,respectively,are used to validate the effectiveness of the suggested method.The result suggests that the suggested strategy is superior for effectively classifying brain tumors.展开更多
This study introduces a hybrid Cuckoo Search-Deep Neural Network(CS-DNN)model for uncertainty quantification and composition optimization of Na_(1/2)Bi_(1/2)TiO_(3)(NBT)-based dielectric energy storage ceramics.Addres...This study introduces a hybrid Cuckoo Search-Deep Neural Network(CS-DNN)model for uncertainty quantification and composition optimization of Na_(1/2)Bi_(1/2)TiO_(3)(NBT)-based dielectric energy storage ceramics.Addressing the limitations of traditional ferroelectric materials—such as hysteresis loss and low breakdown strength under high electric fields—we fabricate(1−x)NBBT8-xBMT solid solutions via chemical modification and systematically investigate their temperature stability and composition-dependent energy storage performance through XRD,SEM,and electrical characterization.The key innovation lies in integrating the CS metaheuristic algorithm with a DNN,overcoming localminima in training and establishing a robust composition-property prediction framework.Our model accurately predicts room-temperature dielectric constant(ε_(r)),maximum dielectric constant(ε_(max)),dielectric loss(tanδ),discharge energy density(W_(rec)),and charge-discharge efficiency(η)from compositional inputs.A Monte Carlo-based uncertainty quantification framework,combined with the 3σ statistical criterion,demonstrates that CSDNN outperforms conventional DNN models in three critical aspects:Higher prediction accuracy(R^(2)=0.9717 vs.0.9382 for ε_(max));Tighter error distribution,satisfying the 99.7% confidence interval under the 3σprinciple;Enhanced robustness,maintaining stable predictions across a 25% composition span in generalization tests.While the model’s generalization is constrained by both the limited experimental dataset(n=45)and the underlying assumptions of MC-based data augmentation,the CS-DNN framework establishes a machine learning-guided paradigm for accelerated discovery of high-temperature dielectric capacitors through its unique capability in quantifying composition-level energy storage uncertainties.展开更多
Deep neural networks provide accurate results for most applications.However,they need a big dataset to train properly.Providing a big dataset is a significant challenge in most applications.Image augmentation refers t...Deep neural networks provide accurate results for most applications.However,they need a big dataset to train properly.Providing a big dataset is a significant challenge in most applications.Image augmentation refers to techniques that increase the amount of image data.Common operations for image augmentation include changes in illumination,rotation,contrast,size,viewing angle,and others.Recently,Generative Adversarial Networks(GANs)have been employed for image generation.However,like image augmentation methods,GAN approaches can only generate images that are similar to the original images.Therefore,they also cannot generate new classes of data.Texture images presentmore challenges than general images,and generating textures is more complex than creating other types of images.This study proposes a gradient-based deep neural network method that generates a new class of texture.It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks.After generating new textures for each class,the number of textures increases through image augmentation.During this process,several techniques are proposed to automatically remove incomplete and similar textures that are created.The proposed method is faster than some well-known generative networks by around 4 to 10 times.In addition,the quality of the generated textures surpasses that of these networks.The proposed method can generate textures that surpass those of someGANs and parametric models in certain image qualitymetrics.It can provide a big texture dataset to train deep networks.A new big texture dataset is created artificially using the proposed method.This dataset is approximately 2 GB in size and comprises 30,000 textures,each 150×150 pixels in size,organized into 600 classes.It is uploaded to the Kaggle site and Google Drive.This dataset is called BigTex.Compared to other texture datasets,the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.展开更多
Traditional Chinese medicine(TCM),especially the plant-based,represents complex chemical system containing various primary and secondary metabolites.These botanical metabolites are structurally diversified and exhibit...Traditional Chinese medicine(TCM),especially the plant-based,represents complex chemical system containing various primary and secondary metabolites.These botanical metabolites are structurally diversified and exhibit significant difference in the acidity,alkalinity,molecular weight,polarity,and content,etc,which thus poses great challenges in assessing the quality of TCM[1].展开更多
The analysis of Android malware shows that this threat is constantly increasing and is a real threat to mobile devices since traditional approaches,such as signature-based detection,are no longer effective due to the ...The analysis of Android malware shows that this threat is constantly increasing and is a real threat to mobile devices since traditional approaches,such as signature-based detection,are no longer effective due to the continuously advancing level of sophistication.To resolve this problem,efficient and flexible malware detection tools are needed.This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations.Moreover,the dataset used in this study is the CIC-AndMal2017,which contains 20,000 instances of network traffic across five distinct malware categories:a.Trojan,b.Adware,c.Ransomware,d.Spyware,e.Worm.These network traffic features are then converted to image formats for deep learning,which is applied in a CNN framework,including the VGG16 pre-trained model.In addition,our approach yielded high performance,yielding an accuracy of 0.92,accuracy of 99.1%,precision of 98.2%,recall of 99.5%,and F1 score of 98.7%.Subsequent improvements to the classification model through changes within the VGG19 framework improved the classification rate to 99.25%.Through the results obtained,it is clear that CNNs are a very effective way to classify Android malware,providing greater accuracy than conventional techniques.The success of this approach also shows the applicability of deep learning in mobile security along with the direction for the future advancement of the real-time detection system and other deeper learning techniques to counter the increasing number of threats emerging in the future.展开更多
The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based ...The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based on oxygen balance mechanism (OBM) and deep neural network (DNN) was established for predicting oxygen blowing time in converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM model and DNN model. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using the actual data collected from an integrated steel plant in China, and compared with multiple linear regression model, OBM model, and neural network model including extreme learning machine, back propagation neural network, and DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layer layers, 32-16-8 neurons per hidden layer, and 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with other models. The predicted hit ratio of oxygen consumption volume within the error±300 m^(3)is 96.67%;determination coefficient (R^(2)) and root mean square error (RMSE) are0.6984 and 150.03 m^(3), respectively. The oxygen blow time prediction hit ratio within the error±0.6 min is 89.50%;R2and RMSE are0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.展开更多
Identifying cyberattacks that attempt to compromise digital systems is a critical function of intrusion detection systems(IDS).Data labeling difficulties,incorrect conclusions,and vulnerability to malicious data injec...Identifying cyberattacks that attempt to compromise digital systems is a critical function of intrusion detection systems(IDS).Data labeling difficulties,incorrect conclusions,and vulnerability to malicious data injections are only a few drawbacks of using machine learning algorithms for cybersecurity.To overcome these obstacles,researchers have created several network IDS models,such as the Hidden Naive Bayes Multiclass Classifier and supervised/unsupervised machine learning techniques.This study provides an updated learning strategy for artificial neural network(ANN)to address data categorization problems caused by unbalanced data.Compared to traditional approaches,the augmented ANN’s 92%accuracy is a significant improvement owing to the network’s increased resilience to disturbances and computational complexity,brought about by the addition of a random weight and standard scaler.Considering the ever-evolving nature of cybersecurity threats,this study introduces a revolutionary intrusion detection method.展开更多
The ability to accurately predict urban traffic flows is crucial for optimising city operations.Consequently,various methods for forecasting urban traffic have been developed,focusing on analysing historical data to u...The ability to accurately predict urban traffic flows is crucial for optimising city operations.Consequently,various methods for forecasting urban traffic have been developed,focusing on analysing historical data to understand complex mobility patterns.Deep learning techniques,such as graph neural networks(GNNs),are popular for their ability to capture spatio-temporal dependencies.However,these models often become overly complex due to the large number of hyper-parameters involved.In this study,we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks(DMST-GNODE),a framework based on ordinary differential equations(ODEs)that autonomously discovers effective spatial-temporal graph neural network(STGNN)architectures for traffic prediction tasks.The comparative analysis of DMST-GNODE and baseline models indicates that DMST-GNODE model demonstrates superior performance across multiple datasets,consistently achieving the lowest Root Mean Square Error(RMSE)and Mean Absolute Error(MAE)values,alongside the highest accuracy.On the BKK(Bangkok)dataset,it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval,maintaining this trend across 40 and 60 min.Similarly,on the PeMS08 dataset,DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min,demonstrating its effectiveness over longer periods.The Los_Loop dataset results further emphasise this model’s advantage,with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min,consistently maintaining superiority across all time intervals.These numerical highlights indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.展开更多
Accurate prediction of molecular properties is crucial for selecting compounds with ideal properties and reducing the costs and risks of trials.Traditional methods based on manually crafted features and graph-based me...Accurate prediction of molecular properties is crucial for selecting compounds with ideal properties and reducing the costs and risks of trials.Traditional methods based on manually crafted features and graph-based methods have shown promising results in molecular property prediction.However,traditional methods rely on expert knowledge and often fail to capture the complex structures and interactions within molecules.Similarly,graph-based methods typically overlook the chemical structure and function hidden in molecular motifs and struggle to effectively integrate global and local molecular information.To address these limitations,we propose a novel fingerprint-enhanced hierarchical graph neural network(FH-GNN)for molecular property prediction that simultaneously learns information from hierarchical molecular graphs and fingerprints.The FH-GNN captures diverse hierarchical chemical information by applying directed message-passing neural networks(D-MPNN)on a hierarchical molecular graph that integrates atomic-level,motif-level,and graph-level information along with their relationships.Addi-tionally,we used an adaptive attention mechanism to balance the importance of hierarchical graphs and fingerprint features,creating a comprehensive molecular embedding that integrated hierarchical mo-lecular structures with domain knowledge.Experiments on eight benchmark datasets from MoleculeNet showed that FH-GNN outperformed the baseline models in both classification and regression tasks for molecular property prediction,validating its capability to comprehensively capture molecular informa-tion.By integrating molecular structure and chemical knowledge,FH-GNN provides a powerful tool for the accurate prediction of molecular properties and aids in the discovery of potential drug candidates.展开更多
The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and hist...The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and historical context,DL offers a powerful complement by enhancing the speed,objectivity,and precision of the classification process.This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks(CNNs)for geological image analysis,particularly in the classification of igneous,metamorphic,and sedimentary rock types from rock thin section(RTS)images.This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision.Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities,achieving an F1-Score of 0.9869 for igneous rocks,0.9884 for metamorphic rocks,and 0.9929 for sedimentary rocks,representing improvements compared to the baseline original results.Moreover,the weighted average F1-Score across all classes and techniques is 0.9886,indicating an enhancement.Conversely,methods like Distort lead to decreased accuracy and F1-Score,with an F1-Score of 0.949 for igneous rocks,0.954 for metamorphic rocks,and 0.9416 for sedimentary rocks,exacerbating the performance compared to the baseline.The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results.The findings of this study can benefit various fields,including remote sensing,mineral exploration,and environmental monitoring,by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.展开更多
It is important to calculate the reachable domain(RD)of the manned lunar mission to evaluate whether a lunar landing site could be reached by the spacecraft. In this paper, the RD of free return orbits is quickly eval...It is important to calculate the reachable domain(RD)of the manned lunar mission to evaluate whether a lunar landing site could be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via the classification and regression neural networks. An efficient databasegeneration method is developed for obtaining eight types of free return orbits and then the RD is defined by the orbit’s inclination and right ascension of ascending node(RAAN) at the perilune. A classify neural network and a regression network are trained respectively. The former is built for classifying the type of the RD, and the latter is built for calculating the inclination and RAAN of the RD. The simulation results show that two neural networks are well trained. The classification model has an accuracy of more than 99% and the mean square error of the regression model is less than 0.01°on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models and a recognition tool is built to evaluate whether a lunar site could be reached. The proposed deep learning method shows the superiority in computation efficiency compared with the traditional double two-body model.展开更多
文摘Deep neural networks(DNNs)are effective in solving both forward and inverse problems for nonlinear partial differential equations(PDEs).However,conventional DNNs are not effective in handling problems such as delay differential equations(DDEs)and delay integrodifferential equations(DIDEs)with constant delays,primarily due to their low regularity at delayinduced breaking points.In this paper,a DNN method that combines multi-task learning(MTL)which is proposed to solve both the forward and inverse problems of DIDEs.The core idea of this approach is to divide the original equation into multiple tasks based on the delay,using auxiliary outputs to represent the integral terms,followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function.Furthermore,given the increased training dificulty associated with multiple tasks and outputs,we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks.This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs,as demonstrated by comparisons with traditional DNN methods.We validate the effectiveness of this method through several numerical experiments,test various parameter sharing structures in MTL and compare the testing results of these structures.Finally,this method is implemented to solve the inverse problem of nonlinear DIDE and the results show that the unknown parameters of DIDE can be discovered with sparse or noisy data.
基金supported by the National Key R&D Program of China(Grant No.:2022YFC3501805)the National Natural Science Foundation of China(Grant No.:82374030)+2 种基金the Science and Technology Program of Tianjin in China(Grant No.:23ZYJDSS00030)the Tianjin Outstanding Youth Fund,China(Grant No.:23JCJQJC00030)the China Postdoctoral Science Foundation-Tianjin Joint Support Program(Grant No.:2023T030TJ).
文摘Metabolomics covers a wide range of applications in life sciences,biomedicine,and phytology.Data acquisition(to achieve high coverage and efficiency)and analysis(to pursue good classification)are two key segments involved in metabolomics workflows.Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups.However,insufficient feature extraction,inappropriate feature selection,overfitting,or underfitting lead to an insufficient capacity to discriminate plants that are often easily confused.Using two ginseng varieties,namely Panax japonicus(PJ)and Panax japonicus var.major(PJvm),containing the similar ginsenosides,we integrated pseudo-targeted metabolomics and deep neural network(DNN)modeling to achieve accurate species differentiation.A pseudo-targeted metabolomics approach was optimized through data acquisition mode,ion pairs generation,comparison between multiple reaction monitoring(MRM)and scheduled MRM(sMRM),and chromatographic elution gradient.In total,1980 ion pairs were monitored within 23 min,allowing for the most comprehensive ginseng metabolome analysis.The established DNN model demonstrated excellent classification performance(in terms of accuracy,precision,recall,F1 score,area under the curve,and receiver operating characteristic(ROC))using the entire metabolome data and feature-selection dataset,exhibiting superior advantages over random forest(RF),support vector machine(SVM),extreme gradient boosting(XGBoost),and multilayer perceptron(MLP).Moreover,DNNs were advantageous for automated feature learning,nonlinear modeling,adaptability,and generalization.This study confirmed practicality of the established strategy for efficient metabolomics data analysis and reliable classification performance even when using small-volume samples.This established approach holds promise for plant metabolomics and is not limited to ginseng.
基金supported by the Intelligent System Research Group(ISysRG)supported by Universitas Sriwijaya funded by the Competitive Research 2024.
文摘Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared with Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represented an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results showed that this DRes-CNN-based imputation method outperforms current state-of-the-art models. These results established DRes-CNN as a reliable solution for addressing missing data.
基金University of Jeddah,Jeddah,Saudi Arabia,grant No.(UJ-23-SRP-10).
文摘Electric vehicles(EVs)are gradually being deployed in the transportation sector.Although they have a high impact on reducing greenhouse gas emissions,their penetration is challenged by their random energy demand and difficult scheduling of their optimal charging.To cope with these problems,this paper presents a novel approach for photovoltaic grid-connected microgrid EV charging station energy demand forecasting.The present study is part of a comprehensive framework involving emerging technologies such as drones and artificial intelligence designed to support the EVs’charging scheduling task.By using predictive algorithms for solar generation and load demand estimation,this approach aimed at ensuring dynamic and efficient energy flow between the solar energy source,the grid and the electric vehicles.The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records.Therefore,various forecasters based on Long Short-term Memory,Gated Recurrent Unit,and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University(Italy).The developed forecasters have been evaluated and compared according to different metrics,including R,RMSE,MAE,and MAPE.We found that the obtained R values for both PV power generation and energy demand ranged between 97%and 98%.These study findings can be used for reliable and efficient decision-making on the management side of the optimal scheduling of the charging operations.
文摘With the increasing importance of supply chain transparency,blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks.This study extends the mathematical model and proof of‘the Overall Performance Characteristics of the Supply Chain’to encompass multiple variables within blockchain data.Utilizing graph theory,the model is further developed into a single-layer neural network,which serves as the foundation for constructing two multi-layer deep learning neural network models,Feedforward Neural Network(abbreviated as FNN)and Deep Clustering Network(abbreviated as DCN).Furthermore,this study retrieves corporate data from the Chunghwa Yellow Pages online resource and Taiwan Economic Journal database(abbreviated as TEJ).These data are then virtualized using‘the Metaverse Algorithm’,and the selected virtualized blockchain variables are utilized to train a neural network model for classification.The results demonstrate that a single-layer neural network model,leveraging blockchain data and employing the Proof of Relation algorithm(abbreviated as PoR)as the activation function,effectively identifies anomalous enterprises,which constitute 7.2%of the total sample,aligning with expectations.In contrast,the multi-layer neural network models,DCN and FNN,classify an excessively large proportion of enterprises as anomalous(ranging from one-fourth to one-third),which deviates from expectations.This indicates that deep learning may still be inadequate in effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data.In other words,procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence(abbreviated as AI).
基金supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2021R1F1A1055408)supported by the Researchers Supporting Project Number(MHIRSP2024005)Almaarefa University,Riyadh,Saudi Arabia.
文摘Fundoscopic diagnosis involves assessing the proper functioning of the eye’s nerves,blood vessels,retinal health,and the impact of diabetes on the optic nerves.Fundus disorders are a major global health concern,affecting millions of people worldwide due to their widespread occurrence.Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy.As a result,accurate fundus detection is essential for early diagnosis and effective treatment,helping to prevent severe complications and improve patient outcomes.To address this need,this article introduces a Derivative Model for Fundus Detection using Deep NeuralNetworks(DMFD-DNN)to enhance diagnostic precision.Thismethod selects key features for fundus detection using the least derivative,which identifies features correlating with stored fundus images.Feature filtering relies on the minimum derivative,determined by extracting both similar and varying textures.In this research,the DNN model was integrated with the derivative model.Fundus images were segmented,features were extracted,and the DNN was iteratively trained to identify fundus regions reliably.The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally,taking into account the least possible derivative across iterations,and using outputs from previous cycles.The hidden layer of the neural network operates on the most significant derivative,which may reduce precision across iterations.These derivatives are treated as inaccurate,and the model is subsequently trained using selective features and their corresponding extractions.The proposed model outperforms previous techniques in detecting fundus regions,achieving 94.98%accuracy and 91.57%sensitivity,with a minimal error rate of 5.43%.It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead,thereby improving operational efficiency and scalability.Ultimately,the proposed model enhances diagnostic precision and reduces errors,leading to more effective fundus dysfunction diagnosis and treatment.
基金supported by the National Natural Science Foundation of China(No.62134004).
文摘To enhance the denoising performance of event-based sensors,we introduce a clustering-based temporal deep neural network denoising method(CBTDNN).Firstly,to cluster the sensor output data and obtain the respective cluster centers,a combination of density-based spatial clustering of applications with noise(DBSCAN)and Kmeans++is utilized.Subsequently,long short-term memory(LSTM)is employed to fit and yield optimized cluster centers with temporal information.Lastly,based on the new cluster centers and denoising ratio,a radius threshold is set,and noise points beyond this threshold are removed.The comprehensive denoising metrics F1_score of CBTDNN have achieved 0.8931,0.7735,and 0.9215 on the traffic sequences dataset,pedestrian detection dataset,and turntable dataset,respectively.And these metrics demonstrate improvements of 49.90%,33.07%,19.31%,and 22.97%compared to four contrastive algorithms,namely nearest neighbor(NNb),nearest neighbor with polarity(NNp),Autoencoder,and multilayer perceptron denoising filter(MLPF).These results demonstrate that the proposed method enhances the denoising performance of event-based sensors.
基金supported in part by the National Natural Science Foundation of China under Grants No.62372087 and No.62072076the Research Fund of State Key Laboratory of Processors under Grant No.CLQ202310the CSC scholarship.
文摘Deep neural networks(DNNs)have found extensive applications in safety-critical artificial intelligence systems,such as autonomous driving and facial recognition systems.However,recent research has revealed their susceptibility to backdoors maliciously injected by adversaries.This vulnerability arises due to the intricate architecture and opacity of DNNs,resulting in numerous redundant neurons embedded within the models.Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs,thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications.This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them.Initially,we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs,highlighting the feasibility and practicality of generating backdoor attacks against DNNs.Subsequently,we provide an overview of notable works encompassing various attack and defense strategies,facilitating a comparative analysis of their approaches.Through these discussions,we offer constructive insights aimed at refining these techniques.Finally,we extend our research perspective to the domain of large language models(LLMs)and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs.Through a systematic review of existing studies on backdoor vulnerabilities in LLMs,we identify critical open challenges in this field and propose actionable directions for future research.
基金Project ZR2023MF111 supported by Shandong Provincial Natural Science Foundation。
文摘With the advancement of Vehicle-to-Everything(V2X)technology,efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance.Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions.To address these challenges,this study presents an innovative framework that combines Graph Neural Networks(GNNs)with a Double Deep Q-Network(DDQN),utilizing dynamic graph structures and reinforcement learning.An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology,thereby improving decision accuracy and efficiency.Meanwhile,the framework models communication links as nodes and interference relationships as edges,effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information.Employing an aggregation mechanism based on the Graph Attention Network(GAT),it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance,ensuring more efficient and adaptive resource management.This design ensures reliable Vehicle-to-Vehicle(V2V)communication while maintaining high Vehicle-to-Infrastructure(V2I)throughput.The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment,allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions.Experimental results demonstrate that the proposed method significantly reduces computational overhead,mitigates latency,and improves resource utilization efficiency in vehicular networks under complex traffic scenarios.This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems,offering substantial theoretical significance and practical value.
文摘The categorization of brain tumors is a significant issue for healthcare applications.Perfect and timely identification of brain tumors is important for employing an effective treatment of this disease.Brain tumors possess high changes in terms of size,shape,and amount,and hence the classification process acts as a more difficult research problem.This paper suggests a deep learning model using the magnetic resonance imaging technique that overcomes the limitations associated with the existing classification methods.The effectiveness of the suggested method depends on the coyote optimization algorithm,also known as the LOBO algorithm,which optimizes the weights of the deep-convolutional neural network classifier.The accuracy,sensitivity,and specificity indices,which are obtained to be 92.40%,94.15%,and 91.92%,respectively,are used to validate the effectiveness of the suggested method.The result suggests that the suggested strategy is superior for effectively classifying brain tumors.
基金supported by the Postgraduate Education Reform and Quality Improvement Project of Henan Province(Grant Nos.YJS2023JD52 and YJS2025GZZ48)the Zhumadian 2023 Major Science and Technology Special Project(Grant No.ZMD SZDZX2023002)+1 种基金2025 Henan Province International Science and Technology Cooperation Project(Cultivation Project,No.252102521011)Research Merit-Based Funding Program for Overseas Educated Personnel in Henan Province(Letter of Henan Human Resources and Social Security Office[2025]No.37).
文摘This study introduces a hybrid Cuckoo Search-Deep Neural Network(CS-DNN)model for uncertainty quantification and composition optimization of Na_(1/2)Bi_(1/2)TiO_(3)(NBT)-based dielectric energy storage ceramics.Addressing the limitations of traditional ferroelectric materials—such as hysteresis loss and low breakdown strength under high electric fields—we fabricate(1−x)NBBT8-xBMT solid solutions via chemical modification and systematically investigate their temperature stability and composition-dependent energy storage performance through XRD,SEM,and electrical characterization.The key innovation lies in integrating the CS metaheuristic algorithm with a DNN,overcoming localminima in training and establishing a robust composition-property prediction framework.Our model accurately predicts room-temperature dielectric constant(ε_(r)),maximum dielectric constant(ε_(max)),dielectric loss(tanδ),discharge energy density(W_(rec)),and charge-discharge efficiency(η)from compositional inputs.A Monte Carlo-based uncertainty quantification framework,combined with the 3σ statistical criterion,demonstrates that CSDNN outperforms conventional DNN models in three critical aspects:Higher prediction accuracy(R^(2)=0.9717 vs.0.9382 for ε_(max));Tighter error distribution,satisfying the 99.7% confidence interval under the 3σprinciple;Enhanced robustness,maintaining stable predictions across a 25% composition span in generalization tests.While the model’s generalization is constrained by both the limited experimental dataset(n=45)and the underlying assumptions of MC-based data augmentation,the CS-DNN framework establishes a machine learning-guided paradigm for accelerated discovery of high-temperature dielectric capacitors through its unique capability in quantifying composition-level energy storage uncertainties.
基金supported via funding from Prince Sattam bin Abdulaziz University(PSAU/2025/R/1446)Princess Nourah bint Abdulrahman University(PNURSP2025R300)Prince Sultan University.
文摘Deep neural networks provide accurate results for most applications.However,they need a big dataset to train properly.Providing a big dataset is a significant challenge in most applications.Image augmentation refers to techniques that increase the amount of image data.Common operations for image augmentation include changes in illumination,rotation,contrast,size,viewing angle,and others.Recently,Generative Adversarial Networks(GANs)have been employed for image generation.However,like image augmentation methods,GAN approaches can only generate images that are similar to the original images.Therefore,they also cannot generate new classes of data.Texture images presentmore challenges than general images,and generating textures is more complex than creating other types of images.This study proposes a gradient-based deep neural network method that generates a new class of texture.It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks.After generating new textures for each class,the number of textures increases through image augmentation.During this process,several techniques are proposed to automatically remove incomplete and similar textures that are created.The proposed method is faster than some well-known generative networks by around 4 to 10 times.In addition,the quality of the generated textures surpasses that of these networks.The proposed method can generate textures that surpass those of someGANs and parametric models in certain image qualitymetrics.It can provide a big texture dataset to train deep networks.A new big texture dataset is created artificially using the proposed method.This dataset is approximately 2 GB in size and comprises 30,000 textures,each 150×150 pixels in size,organized into 600 classes.It is uploaded to the Kaggle site and Google Drive.This dataset is called BigTex.Compared to other texture datasets,the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
文摘Traditional Chinese medicine(TCM),especially the plant-based,represents complex chemical system containing various primary and secondary metabolites.These botanical metabolites are structurally diversified and exhibit significant difference in the acidity,alkalinity,molecular weight,polarity,and content,etc,which thus poses great challenges in assessing the quality of TCM[1].
基金funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University,through the Research Funding Program,Grant No.(FRP-1443-15).
文摘The analysis of Android malware shows that this threat is constantly increasing and is a real threat to mobile devices since traditional approaches,such as signature-based detection,are no longer effective due to the continuously advancing level of sophistication.To resolve this problem,efficient and flexible malware detection tools are needed.This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations.Moreover,the dataset used in this study is the CIC-AndMal2017,which contains 20,000 instances of network traffic across five distinct malware categories:a.Trojan,b.Adware,c.Ransomware,d.Spyware,e.Worm.These network traffic features are then converted to image formats for deep learning,which is applied in a CNN framework,including the VGG16 pre-trained model.In addition,our approach yielded high performance,yielding an accuracy of 0.92,accuracy of 99.1%,precision of 98.2%,recall of 99.5%,and F1 score of 98.7%.Subsequent improvements to the classification model through changes within the VGG19 framework improved the classification rate to 99.25%.Through the results obtained,it is clear that CNNs are a very effective way to classify Android malware,providing greater accuracy than conventional techniques.The success of this approach also shows the applicability of deep learning in mobile security along with the direction for the future advancement of the real-time detection system and other deeper learning techniques to counter the increasing number of threats emerging in the future.
基金financially supported by the National Natural Science Foundation of China (Nos.51974023 and52374321)the funding of State Key Laboratory of Advanced Metallurgy,University of Science and Technology Beijing,China (No.41620007)。
文摘The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based on oxygen balance mechanism (OBM) and deep neural network (DNN) was established for predicting oxygen blowing time in converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM model and DNN model. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using the actual data collected from an integrated steel plant in China, and compared with multiple linear regression model, OBM model, and neural network model including extreme learning machine, back propagation neural network, and DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layer layers, 32-16-8 neurons per hidden layer, and 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with other models. The predicted hit ratio of oxygen consumption volume within the error±300 m^(3)is 96.67%;determination coefficient (R^(2)) and root mean square error (RMSE) are0.6984 and 150.03 m^(3), respectively. The oxygen blow time prediction hit ratio within the error±0.6 min is 89.50%;R2and RMSE are0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
文摘Identifying cyberattacks that attempt to compromise digital systems is a critical function of intrusion detection systems(IDS).Data labeling difficulties,incorrect conclusions,and vulnerability to malicious data injections are only a few drawbacks of using machine learning algorithms for cybersecurity.To overcome these obstacles,researchers have created several network IDS models,such as the Hidden Naive Bayes Multiclass Classifier and supervised/unsupervised machine learning techniques.This study provides an updated learning strategy for artificial neural network(ANN)to address data categorization problems caused by unbalanced data.Compared to traditional approaches,the augmented ANN’s 92%accuracy is a significant improvement owing to the network’s increased resilience to disturbances and computational complexity,brought about by the addition of a random weight and standard scaler.Considering the ever-evolving nature of cybersecurity threats,this study introduces a revolutionary intrusion detection method.
文摘The ability to accurately predict urban traffic flows is crucial for optimising city operations.Consequently,various methods for forecasting urban traffic have been developed,focusing on analysing historical data to understand complex mobility patterns.Deep learning techniques,such as graph neural networks(GNNs),are popular for their ability to capture spatio-temporal dependencies.However,these models often become overly complex due to the large number of hyper-parameters involved.In this study,we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks(DMST-GNODE),a framework based on ordinary differential equations(ODEs)that autonomously discovers effective spatial-temporal graph neural network(STGNN)architectures for traffic prediction tasks.The comparative analysis of DMST-GNODE and baseline models indicates that DMST-GNODE model demonstrates superior performance across multiple datasets,consistently achieving the lowest Root Mean Square Error(RMSE)and Mean Absolute Error(MAE)values,alongside the highest accuracy.On the BKK(Bangkok)dataset,it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval,maintaining this trend across 40 and 60 min.Similarly,on the PeMS08 dataset,DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min,demonstrating its effectiveness over longer periods.The Los_Loop dataset results further emphasise this model’s advantage,with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min,consistently maintaining superiority across all time intervals.These numerical highlights indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
基金supported by Macao Science and Technology Development Fund,Macao SAR,China(Grant No.:0043/2023/AFJ)the National Natural Science Foundation of China(Grant No.:22173038)Macao Polytechnic University,Macao SAR,China(Grant No.:RP/FCA-01/2022).
文摘Accurate prediction of molecular properties is crucial for selecting compounds with ideal properties and reducing the costs and risks of trials.Traditional methods based on manually crafted features and graph-based methods have shown promising results in molecular property prediction.However,traditional methods rely on expert knowledge and often fail to capture the complex structures and interactions within molecules.Similarly,graph-based methods typically overlook the chemical structure and function hidden in molecular motifs and struggle to effectively integrate global and local molecular information.To address these limitations,we propose a novel fingerprint-enhanced hierarchical graph neural network(FH-GNN)for molecular property prediction that simultaneously learns information from hierarchical molecular graphs and fingerprints.The FH-GNN captures diverse hierarchical chemical information by applying directed message-passing neural networks(D-MPNN)on a hierarchical molecular graph that integrates atomic-level,motif-level,and graph-level information along with their relationships.Addi-tionally,we used an adaptive attention mechanism to balance the importance of hierarchical graphs and fingerprint features,creating a comprehensive molecular embedding that integrated hierarchical mo-lecular structures with domain knowledge.Experiments on eight benchmark datasets from MoleculeNet showed that FH-GNN outperformed the baseline models in both classification and regression tasks for molecular property prediction,validating its capability to comprehensively capture molecular informa-tion.By integrating molecular structure and chemical knowledge,FH-GNN provides a powerful tool for the accurate prediction of molecular properties and aids in the discovery of potential drug candidates.
文摘The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and historical context,DL offers a powerful complement by enhancing the speed,objectivity,and precision of the classification process.This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks(CNNs)for geological image analysis,particularly in the classification of igneous,metamorphic,and sedimentary rock types from rock thin section(RTS)images.This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision.Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities,achieving an F1-Score of 0.9869 for igneous rocks,0.9884 for metamorphic rocks,and 0.9929 for sedimentary rocks,representing improvements compared to the baseline original results.Moreover,the weighted average F1-Score across all classes and techniques is 0.9886,indicating an enhancement.Conversely,methods like Distort lead to decreased accuracy and F1-Score,with an F1-Score of 0.949 for igneous rocks,0.954 for metamorphic rocks,and 0.9416 for sedimentary rocks,exacerbating the performance compared to the baseline.The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results.The findings of this study can benefit various fields,including remote sensing,mineral exploration,and environmental monitoring,by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.
基金supported by the National Natural Science Foundation of China (12072365)the Natural Science Foundation of Hunan Province of China (2020JJ4657)。
文摘It is important to calculate the reachable domain(RD)of the manned lunar mission to evaluate whether a lunar landing site could be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via the classification and regression neural networks. An efficient databasegeneration method is developed for obtaining eight types of free return orbits and then the RD is defined by the orbit’s inclination and right ascension of ascending node(RAAN) at the perilune. A classify neural network and a regression network are trained respectively. The former is built for classifying the type of the RD, and the latter is built for calculating the inclination and RAAN of the RD. The simulation results show that two neural networks are well trained. The classification model has an accuracy of more than 99% and the mean square error of the regression model is less than 0.01°on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models and a recognition tool is built to evaluate whether a lunar site could be reached. The proposed deep learning method shows the superiority in computation efficiency compared with the traditional double two-body model.