The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. A comparative analysis against baseline models indicates that DMST-GNODE delivers superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend at 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer horizons. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, maintaining superiority across all time intervals. These results indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
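To make the graph-ODE idea concrete, here is a minimal sketch in PyTorch. The class name `GraphODEBlock`, the layer sizes, and the fixed-step Euler integrator are our illustrative assumptions, not the authors' DMST-GNODE code: node hidden states evolve along a graph-convolutional vector field dh/dt = f(h, A).

```python
# Minimal graph neural ODE sketch (illustrative, not the paper's model):
# hidden state h(t) follows dh/dt = tanh(A @ h @ W), integrated with Euler.
import torch
import torch.nn as nn

class GraphODEBlock(nn.Module):
    def __init__(self, dim, steps=4, dt=0.25):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.steps, self.dt = steps, dt

    def dynamics(self, h, adj):
        # A simple graph-convolutional vector field over adjacency `adj`
        return torch.tanh(adj @ self.lin(h))

    def forward(self, h, adj):
        # Explicit Euler integration over [0, steps * dt]
        for _ in range(self.steps):
            h = h + self.dt * self.dynamics(h, adj)
        return h

# Toy usage: 10 traffic sensors with 8-dim features, row-normalised adjacency
n, d = 10, 8
adj = torch.rand(n, n)
adj = adj / adj.sum(dim=1, keepdim=True)
h0 = torch.randn(n, d)
out = GraphODEBlock(d)(h0, adj)
print(out.shape)  # torch.Size([10, 8])
```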
Supervised fault diagnosis typically assumes that all types of machinery failure are known. In practice, however, unknown types of defect, i.e., novelties, may occur, and their detection is a challenging task. In this paper, a novel fault diagnostic method is developed for both diagnostics and novelty detection. To this end, a sparse autoencoder-based multi-head Deep Neural Network (DNN) is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data. The detection of novelties is based on the reconstruction error. Moreover, the computational burden is reduced by directly training the multi-head DNN with a rectified linear unit activation function, instead of performing the pre-training and fine-tuning phases required for classical DNNs. The proposed method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer. The results show that its performance is satisfactory in both novelty detection and fault diagnosis, outperforming other state-of-the-art methods. In short, this research proposes a fault diagnostics method that can not only diagnose known types of defect but also detect unknown ones.
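A minimal sketch of the multi-head idea, under assumed toy dimensions and with the sparsity penalty omitted: a shared ReLU encoder feeds a reconstruction head and a classification head, the two losses are summed for joint training, and novelties are flagged by thresholding the per-sample reconstruction error.

```python
# Hedged sketch of a multi-head DNN for joint reconstruction/classification
# (not the authors' exact network; dimensions and loss weight are assumed).
import torch
import torch.nn as nn

class MultiHeadDNN(nn.Module):
    def __init__(self, in_dim, latent=32, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        self.classifier = nn.Linear(latent, n_classes)

    def forward(self, x):
        z = self.encoder(x)           # shared encoding representation
        return self.decoder(z), self.classifier(z)

model = MultiHeadDNN(in_dim=64)
x = torch.randn(256, 64)              # toy monitoring data
y = torch.randint(0, 4, (256,))       # known fault-class labels
recon, logits = model(x)
# Joint loss: unsupervised reconstruction + supervised classification
loss = nn.functional.mse_loss(recon, x) \
       + 0.5 * nn.functional.cross_entropy(logits, y)
loss.backward()

# Novelty rule: reconstruction error above a quantile fitted on training data
err = ((recon - x) ** 2).mean(dim=1)
threshold = err.detach().quantile(0.99)
is_novel = err > threshold
```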
This paper presents an innovative data-integration method that uses iterative learning, coupling a deep neural network (DNN) with a stacked autoencoder (SAE), to solve issues encountered in many-objective history matching. The proposed method consists of a DNN-based inverse model with SAE-encoded static data, with iterative updates of the supervised-learning data driven by distance-based clustering schemes. The DNN functions as an inverse model operating on encoded, flattened data, while the SAE, as a pre-trained neural network, successfully reduces dimensionality and reliably reconstructs geomodels. The iterative-learning method improves the training data for the DNN, with the error reduction evident at each iteration step. The proposed workflow achieves a small mean absolute percentage error, below 4% for all objective functions, whereas a typical multi-objective evolutionary algorithm fails to significantly reduce the initial population uncertainty. Iterative-learning-based many-objective history matching estimates trends in water cuts that are not reliably captured by dynamic-data matching alone, confirming that the proposed workflow constructs more plausible geomodels. The workflow is therefore a reliable alternative to the less-convergent Pareto-based multi-objective evolutionary algorithm in the presence of geological uncertainty and varying objective functions.
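The pipeline shape can be sketched as follows (all sizes are hypothetical, not the paper's configuration): an SAE compresses flattened static geomodels to a latent code, a separate DNN inverse model maps observed dynamic data to that code, and decoding the predicted code yields a calibrated geomodel.

```python
# Illustrative SAE + inverse-DNN sketch with assumed shapes (toy data).
import torch
import torch.nn as nn

grid_dim, latent_dim, dyn_dim = 2500, 64, 120  # hypothetical sizes

sae_encoder = nn.Sequential(nn.Linear(grid_dim, 512), nn.ReLU(),
                            nn.Linear(512, latent_dim))
sae_decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                            nn.Linear(512, grid_dim))
inverse_dnn = nn.Sequential(nn.Linear(dyn_dim, 256), nn.ReLU(),
                            nn.Linear(256, latent_dim))

geomodels = torch.randn(32, grid_dim)   # flattened static geomodels
dyn_data = torch.randn(32, dyn_dim)     # matching dynamic responses

# Step 1: pre-train the SAE on geomodels (one illustrative step shown)
recon_loss = nn.functional.mse_loss(sae_decoder(sae_encoder(geomodels)),
                                    geomodels)
# Step 2: train the inverse DNN to predict the SAE codes from dynamic data
code_loss = nn.functional.mse_loss(inverse_dnn(dyn_data),
                                   sae_encoder(geomodels).detach())
(recon_loss + code_loss).backward()

# Inference: dynamic observations -> latent code -> reconstructed geomodel
predicted_model = sae_decoder(inverse_dnn(dyn_data[:1]))
```

The paper's distance-based clustering step, which selects which samples re-enter the training set at each iteration, is omitted here for brevity.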
Spatio-temporal cellular network traffic prediction at the wide-area level plays an important role in resource reconfiguration, traffic scheduling, and intrusion detection, thus potentially supporting the connected intelligence of the sixth generation of mobile communications technology (6G). However, existing studies focus only on the spatio-temporal modelling of traffic data for a single network service, such as short message, call, or Internet. This is not conducive to accurate prediction of traffic data characterised by diverse network services, spatio-temporality, and very large volume. To address this issue, a novel multi-task deep learning framework is developed for citywide cellular network traffic prediction. Functionally, this framework consists mainly of a dual modular feature-sharing layer and a multi-task learning layer (DMFS-MT). The former mines long-term spatio-temporal dependencies and local spatio-temporal fluctuation trends in the data, respectively, via a new combination of the convolutional gated recurrent unit (ConvGRU) and the 3-dimensional convolutional neural network (3D-CNN). In the latter, each task predicts service-specific traffic data with a fully connected network. On the real-world Telecom Italia dataset, simulation results demonstrate the effectiveness of our proposal through prediction performance measures, spatial pattern comparison, and statistical distribution verification.
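As one building block of such a framework, a ConvGRU cell can be written compactly: gates are computed with 2-D convolutions so that the spatial structure of the traffic grid is preserved across time steps. The sketch below is illustrative only; the paper's DMFS-MT additionally pairs this with a 3D-CNN branch and per-service fully connected heads.

```python
# Minimal ConvGRU cell (illustrative; sizes are toy values, not DMFS-MT's).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, h):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)                  # update / reset gates
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

# Toy rollout over 6 time steps of a 32x32 city grid with 3 service channels
cell = ConvGRUCell(in_ch=3, hid_ch=16)
h = torch.zeros(8, 16, 32, 32)
for t in range(6):
    h = cell(torch.randn(8, 3, 32, 32), h)
print(h.shape)  # torch.Size([8, 16, 32, 32])
```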
Network embedding (NE) tries to learn the potential properties of complex networks represented in a low-dimensional feature space. However, existing deep learning-based NE methods are time-consuming, as they need to train a dense deep neural network architecture with extensive unknown weight parameters. A sparse deep autoencoder (called SPDNE) for dynamic NE is proposed, aiming to learn network structures while preserving node evolution at low computational complexity. SPDNE uses an optimal sparse architecture to replace the fully connected architecture in the deep autoencoder while maintaining the performance of these models in dynamic NE. Then, an adaptive simulated algorithm is proposed to find the optimal sparse architecture for the deep autoencoder. The performance of SPDNE within three dynamic NE models (i.e., the sparse architecture-based deep autoencoder method, DynGEM, and ElvDNE) is evaluated on three well-known benchmark networks and five real-world networks. The experimental results demonstrate that SPDNE can remove about 70% of the weight parameters of the deep autoencoder architecture during training while preserving the performance of these dynamic NE models. The results also show that SPDNE achieves the highest accuracy on 72 out of 96 edge prediction and network reconstruction tasks compared with state-of-the-art dynamic NE algorithms.
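The sparsification mechanics can be sketched with binary weight masks (an assumption about how such sparsity is realised, not SPDNE itself): masked entries contribute nothing to the forward pass and receive zero gradients, so only the sparse subset of weights is effectively trained.

```python
# Sketch of a sparse (masked) autoencoder; the mask pattern is what an
# adaptive architecture search would optimise. Density ~0.3 echoes the
# ~70% parameter reduction reported for SPDNE.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, in_f, out_f, density=0.3):
        super().__init__()
        self.lin = nn.Linear(in_f, out_f)
        # Fixed random binary mask over the weight matrix
        self.register_buffer("mask",
                             (torch.rand(out_f, in_f) < density).float())

    def forward(self, x):
        # Masked weights are zero in the product, so their gradients are zero
        return nn.functional.linear(x, self.lin.weight * self.mask,
                                    self.lin.bias)

n_nodes, latent = 500, 32
encoder = nn.Sequential(MaskedLinear(n_nodes, 128), nn.ReLU(),
                        MaskedLinear(128, latent))
decoder = nn.Sequential(MaskedLinear(latent, 128), nn.ReLU(),
                        MaskedLinear(128, n_nodes))

adj_rows = torch.rand(64, n_nodes)      # toy rows of an adjacency matrix
loss = nn.functional.mse_loss(decoder(encoder(adj_rows)), adj_rows)
loss.backward()
```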
In the era of Big Data, learning discriminant feature representations from network traffic has been identified as an essential task for improving the detection ability of an intrusion detection system (IDS). Owing to the lack of accurately labelled network traffic data, many unsupervised feature representation learning models have been proposed with state-of-the-art performance. Yet these models fail to consider the classification error while learning the feature representation; intuitively, the learnt feature representation may degrade the performance of the classification task. For the first time in the field of intrusion detection, this paper proposes an unsupervised IDS model leveraging the benefits of a deep autoencoder (DAE) for learning a robust feature representation and a one-class support vector machine (OCSVM) for finding a more compact decision hyperplane for intrusion detection. Specifically, the proposed model defines a new unified objective function that minimises the reconstruction and classification errors simultaneously. This contribution not only enables the model to support joint learning of the feature representation and the classifier but also guides it to learn a robust feature representation that improves the discrimination ability of the classifier. Three sets of evaluation experiments demonstrate the potential of the proposed model. First, an ablation evaluation on the benchmark NSL-KDD dataset validates the model's design decisions. Next, a performance evaluation on the recent UNSW-NB15 intrusion dataset shows the model's stable performance. Finally, a comparative evaluation verifies the efficacy of the proposed model against recently published state-of-the-art methods.
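The overall shape of the pipeline can be illustrated with a simplified two-stage stand-in. Note the simplification: the paper minimises reconstruction and classification errors jointly through its unified objective, whereas below the autoencoder is trained first and scikit-learn's OneClassSVM is then fitted on the learned codes.

```python
# Two-stage DAE + OCSVM sketch (a simplification of the paper's joint
# objective; feature count and training length are toy assumptions).
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

X = torch.randn(1000, 41)  # e.g., 41 NSL-KDD-style features (toy data)

encoder = nn.Sequential(nn.Linear(41, 16), nn.ReLU())
decoder = nn.Sequential(nn.Linear(16, 41))
opt = torch.optim.Adam(list(encoder.parameters())
                       + list(decoder.parameters()), lr=1e-3)

for _ in range(50):  # short reconstruction-only training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

# One-class SVM on the latent codes: points outside the learned
# hyperplane (predicted -1) are treated as intrusions
codes = encoder(X).detach().numpy()
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(codes)
labels = ocsvm.predict(codes)  # +1 normal, -1 anomalous
```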
Wind and solar energy are two popular forms of renewable energy used in microgrids, facilitating the transition towards net-zero carbon emissions by 2050. However, they are exceedingly unpredictable, since they rely heavily on weather and atmospheric conditions. In microgrids, smart energy management systems, such as integrated demand response programs, are established on a step-ahead basis, which means that accurate forecasting of wind speed and solar irradiance intervals is becoming increasingly crucial to the optimal operation and planning of microgrids. With this in mind, a novel bidirectional long short-term memory network (Bi-LSTM)-based, deep stacked, sequence-to-sequence autoencoder (S2SAE) forecasting model for predicting short-term solar irradiation and wind speed was developed and evaluated in MATLAB. To create the deep stacked S2SAE prediction model, a deep Bi-LSTM-based encoder and decoder are stacked on top of one another to reduce the dimension of the input sequence, extract its features, and then reconstruct it to produce the forecasts. Hyperparameters of the proposed deep stacked S2SAE forecasting model were optimised using the Bayesian optimisation algorithm. Moreover, its forecasting performance was compared to three other deep and shallow stacked S2SAEs, i.e., an LSTM-based deep stacked S2SAE, a gated recurrent unit-based deep stacked S2SAE, and a Bi-LSTM-based shallow stacked S2SAE, all of which were likewise optimised and modelled in MATLAB. Results simulated on actual data confirmed that the proposed model outperformed the alternatives, achieving an accuracy of up to 99.7% and evidencing the high reliability of the proposed forecasting.
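The paper works in MATLAB; the PyTorch sketch below mirrors only the architecture shape, with illustrative layer sizes and horizon: a bidirectional LSTM encoder compresses the input sequence, and an LSTM decoder reconstructs a multi-step forecast from the repeated context vector.

```python
# Hedged sketch of a Bi-LSTM sequence-to-sequence autoencoder forecaster
# (layer sizes, horizon, and the single-layer stack are our assumptions).
import torch
import torch.nn as nn

class BiLSTMS2SAE(nn.Module):
    def __init__(self, n_feat=1, hid=64, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_feat, hid, batch_first=True,
                               bidirectional=True)
        self.decoder = nn.LSTM(2 * hid, hid, batch_first=True)
        self.head = nn.Linear(hid, n_feat)

    def forward(self, x):                 # x: (batch, seq_len, n_feat)
        enc_out, _ = self.encoder(x)
        # Compress to the final encoder state, repeat it as decoder input
        context = enc_out[:, -1:, :].repeat(1, self.horizon, 1)
        dec_out, _ = self.decoder(context)
        return self.head(dec_out)         # (batch, horizon, n_feat)

model = BiLSTMS2SAE()
past = torch.randn(16, 48, 1)             # 48 past wind-speed readings
forecast = model(past)                     # 12-step-ahead forecast
print(forecast.shape)  # torch.Size([16, 12, 1])
```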
Graph embedding aims to map high-dimensional nodes to a low-dimensional space and to learn graph relationships from the latent representations. Most existing graph embedding methods focus on the topological structure of graph data but ignore its semantic information, which results in unsatisfactory performance in practical applications. To overcome this problem, this paper proposes a novel deep convolutional adversarial graph autoencoder (GAE) model. To embed the semantic information between nodes in the graph data, a random walk strategy is first used to construct the positive pointwise mutual information (PPMI) matrix; then a graph convolutional network (GCN) encodes the PPMI matrix and node content into the latent representation. Finally, the learned latent representation is used to reconstruct the topological structure of the graph data via the decoder. Furthermore, a deep convolutional adversarial training algorithm is introduced to make the learned latent representation conform better to the prior distribution. State-of-the-art experimental results on graph data validate the effectiveness of the proposed model in link prediction, node clustering, and graph visualisation tasks on three standard datasets: Cora, Citeseer, and Pubmed.
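A minimal GAE-style sketch of the encode/decode path (the PPMI construction and the adversarial regulariser are omitted; the random symmetric matrix below merely stands in for a real PPMI matrix): a one-layer GCN encodes node content, and an inner-product decoder reconstructs the adjacency.

```python
# GCN encoder + inner-product decoder sketch (toy stand-in data).
import torch
import torch.nn as nn

def normalise(m):
    # Symmetric normalisation D^{-1/2} (M + I) D^{-1/2}
    m = m + torch.eye(m.size(0))
    d = m.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * m * d.unsqueeze(0)

n, feat_dim, emb_dim = 100, 50, 16
ppmi = torch.rand(n, n)
ppmi = (ppmi + ppmi.T) / 2                    # stand-in PPMI matrix
X = torch.randn(n, feat_dim)                  # node content features
W = nn.Parameter(torch.randn(feat_dim, emb_dim) * 0.1)

A_hat = normalise(ppmi)
Z = torch.relu(A_hat @ X @ W)                 # GCN encoding
A_rec = torch.sigmoid(Z @ Z.T)                # inner-product decoder

adj_target = (torch.rand(n, n) < 0.05).float()  # toy ground-truth edges
loss = nn.functional.binary_cross_entropy(A_rec, adj_target)
loss.backward()
```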
With the rapid growth of biomedical data, particularly multi-omics data spanning genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing omics data, owing to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements, as well as future directions in combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
Many types of real-world information systems, including social media and e-commerce platforms, can be modelled by means of attribute-rich, connected networks. The goal of anomaly detection in artificial intelligence is to identify instances that deviate significantly from the main distribution of the data or that differ from known cases. Anomalous nodes in node-attributed networks can be identified with greater precision if both graph and node attributes are taken into account. Almost all studies in this area focus on supervised techniques for spotting outliers; while supervised algorithms for anomaly detection work well in theory, they cannot be applied to real-world applications owing to a lack of labelled data. Considering the possible data distribution, our model employs a dual variational autoencoder (VAE), while a generative adversarial network (GAN) ensures that the model is robust to adversarial training. The dual VAEs also serve in another capacity: as a fake-node generator. Adversarial training is used to ensure that our latent codes have a Gaussian or uniform distribution. To provide a fair representation of the graph, the discriminator instructs the generator to generate latent variables with distributions that are more consistent with the actual distribution of the data. Once the model has been learned, the discriminator is used for anomaly detection via a reconstruction loss, having been trained to distinguish between the normal and artificial data distributions. First, using a dual VAE, our model simultaneously captures cross-modality interactions between topological structure and node characteristics and overcomes the problem of unlabelled anomalies, allowing us to better handle network sparsity and nonlinearity. Second, the proposed model regularises the latent codes, solving the issue of unregularised embedding techniques that can quickly lead to unsatisfactory representations. Finally, we use the discriminator reconstruction loss for anomaly detection, as the discriminator is well trained to separate the normal and generated data distributions and reconstruction-based loss does not include the adversarial component. Experiments conducted on attributed networks demonstrate the effectiveness of the proposed model and show that it greatly surpasses previous methods. The area-under-the-curve scores of our proposed model on the BlogCatalog, Flickr, and Enron datasets are 0.83680, 0.82020, and 0.71180, respectively. The result on the Enron dataset is slightly worse than that of other models; we attribute this to the dataset's low dimensionality as the most probable explanation.
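The reconstruction-based scoring step alone can be sketched as follows (the full model adds a second VAE, a GAN discriminator, and adversarial regularisation of the latent codes, all omitted here; dimensions are toy assumptions): nodes with high reconstruction error rank as anomalies.

```python
# VAE reconstruction-error anomaly scoring sketch (illustrative only).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

vae = VAE(in_dim=300)                 # e.g., 300 node attributes (toy)
x = torch.randn(500, 300)
recon, mu, logvar = vae(x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl
loss.backward()

scores = ((recon - x) ** 2).mean(dim=1)   # anomaly score per node
top_anomalies = scores.topk(10).indices
```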
Human Activity Recognition (HAR) is an active research area due to its applications in pervasive computing, human-computer interaction, artificial intelligence, health care, and the social sciences. Moreover, dynamic environments and anthropometric differences between individuals make it harder to recognise actions. This study focuses on human activity in video sequences acquired with an RGB camera because of its vast range of real-world applications. It uses a two-stream ConvNet to extract spatial and temporal information and proposes a fine-tuned deep neural network. Moreover, the transfer learning paradigm is adopted to extract varied and fixed frames while reusing object identification information. Six state-of-the-art pre-trained models are exploited to find the best model for spatial feature extraction. For the temporal sequence, this study uses dense optical flow following the two-stream ConvNet and Bidirectional Long Short-Term Memory (BiLSTM) to capture long-term dependencies. Two state-of-the-art datasets, UCF101 and HMDB51, are used for evaluation purposes, and seven state-of-the-art optimisers are used to fine-tune the proposed network parameters. Furthermore, this study utilises an ensemble mechanism to aggregate spatial-temporal features using a four-stream Convolutional Neural Network (CNN), where two streams use RGB data and the remaining streams use optical flow images. Finally, the proposed ensemble approach using max hard voting outperforms state-of-the-art methods with 96.30% and 90.07% accuracies on the UCF101 and HMDB51 datasets.
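The hard-voting step of such a four-stream ensemble is simple to state in code (the stream outputs below are random stand-ins; in the paper they would come from the RGB and optical-flow branches): each stream emits a class prediction, and the majority label per clip wins.

```python
# Illustrative max hard voting across four stream predictions (toy logits).
import torch

n_clips, n_classes, n_streams = 32, 101, 4   # UCF101 has 101 classes
logits = torch.randn(n_streams, n_clips, n_classes)  # stand-in outputs

votes = logits.argmax(dim=-1)                # (streams, clips) hard labels
# Majority vote: most frequent label across the four streams, per clip
final = torch.mode(votes, dim=0).values
print(final.shape)  # torch.Size([32])
```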
Software defect prediction plays an important role in software quality assurance. However, the performance of a prediction model is susceptible to irrelevant and redundant features. In addition, previous studies mostly treat software defect prediction as a single-objective optimisation problem, and multi-objective software defect prediction has not been thoroughly investigated. For these two reasons, we propose the following solutions in this paper: (1) we leverage an advanced deep neural network, the Stacked Contractive AutoEncoder (SCAE), to extract robust deep semantic features from the original defect features, giving stronger discrimination capacity between classes (defective or non-defective); (2) we propose a novel multi-objective defect prediction model named SMONGE that utilises the multi-objective NSGA-II algorithm to optimise an advanced neural network, the Extreme Learning Machine (ELM), over state-of-the-art Pareto-optimal solutions, based on the features extracted by SCAE. We mainly consider two objectives: one is to maximise the performance of the ELM, which reflects the benefit of the SMONGE model; the other is to minimise the output weight norm of the ELM, which relates to the cost of the SMONGE model. We compare SCAE with six state-of-the-art feature extraction methods and compare the SMONGE model with multiple baseline models, comprising four classic defect predictors and the MONGE model without SCAE, across 20 open-source software projects. The experimental results verify the superiority of SCAE and SMONGE on seven evaluation metrics.
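The ELM at the core of SMONGE is compact enough to sketch in full (the data below are random stand-ins for SCAE features): random hidden weights stay fixed, and only the output weights are solved in closed form, whose norm is exactly the quantity the second objective penalises.

```python
# Minimal Extreme Learning Machine sketch (NumPy; toy data and sizes).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))               # stand-in SCAE deep features
y = rng.integers(0, 2, size=200)             # defective / non-defective
Y = np.eye(2)[y]                             # one-hot targets

n_hidden = 50
W = rng.normal(size=(20, n_hidden))          # fixed random input weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                       # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, Y, rcond=None) # output weights, closed form
pred = (H @ beta).argmax(axis=1)
print("train accuracy:", (pred == y).mean())
print("output weight norm:", np.linalg.norm(beta))  # SMONGE's cost objective
```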
The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks; in addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources, and given the unpredictable nature of network technologies and diverse intrusion methods, conventional machine-learning approaches lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to precisely detect anomalies. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. First, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality and obtain a significant data representation by calculating the reconstruction error. Second, a Convolutional Neural Network (CNN) is employed to create a binary classification approach. The proposed SAE-CNN approach is validated on the Bot-IoT dataset. The proposed models exceed the performance of existing deep learning approaches in the literature, with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 of 99.9%, a False Positive Rate (FPR) of 0.0003, and a True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicate that SAE-CNN performs better.
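The shape of the SAE-then-CNN pipeline can be illustrated as follows (dimensions are assumptions, not the Bot-IoT configuration): the SAE compresses each flow record, and the compressed vector is treated as a 1-D sequence for a small CNN binary classifier.

```python
# SAE -> 1-D CNN pipeline sketch (toy sizes; not the paper's exact layout).
import torch
import torch.nn as nn

sae = nn.Sequential(nn.Linear(40, 16), nn.ReLU())       # toy encoder
cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(8 * 8, 2),                                 # benign vs. attack
)

flows = torch.randn(64, 40)                              # toy flow records
z = sae(flows).unsqueeze(1)                              # (batch, 1, 16)
logits = cnn(z)
loss = nn.functional.cross_entropy(logits,
                                   torch.randint(0, 2, (64,)))
loss.backward()
```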
The proliferation of Internet of Things (IoT) technology has exponentially increased the number of devices interconnected over networks, thereby escalating the potential vectors for cybersecurity threats. In response, this study rigorously applies and evaluates deep learning models, namely Convolutional Neural Networks (CNN), Autoencoders, and Long Short-Term Memory (LSTM) networks, to engineer an advanced Intrusion Detection System (IDS) specifically designed for IoT environments. Utilising the comprehensive UNSW-NB15 dataset, which encompasses 49 distinct features representing varied network traffic characteristics, our methodology focused on meticulous data preprocessing, including cleaning, normalisation, and strategic feature selection, to enhance model performance. A robust comparative analysis highlights the CNN model's outstanding performance, achieving an accuracy of 99.89%, precision of 99.90%, recall of 99.88%, and an F1 score of 99.89% in binary classification tasks, significantly outperforming the other evaluated models. These results not only confirm the superior detection capabilities of CNNs in distinguishing between benign and malicious network activities but also illustrate the model's effectiveness in multiclass classification tasks, addressing various attack vectors prevalent in IoT setups. The empirical findings from this research demonstrate deep learning's transformative potential in fortifying network security infrastructures against sophisticated cyber threats, providing a scalable, high-performance solution that enhances security measures across increasingly complex IoT ecosystems. This study's outcomes are critical for security practitioners and researchers focusing on the next generation of cyber defence mechanisms, offering a data-driven foundation for future advancements in IoT security strategies.
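The preprocessing stages named here (cleaning, normalisation, strategic feature selection) can be sketched with scikit-learn; the value of k and the toy data are placeholders, not the paper's UNSW-NB15 recipe.

```python
# Hedged preprocessing sketch: clean, scale to [0, 1], then keep the k
# features most informative about the label.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 49))        # 49 features, as in UNSW-NB15
y = rng.integers(0, 2, size=1000)      # benign / malicious labels (toy)

X = np.nan_to_num(X)                   # cleaning: replace NaN/inf values
X = MinMaxScaler().fit_transform(X)    # normalisation to [0, 1]
selector = SelectKBest(mutual_info_classif, k=20)
X_sel = selector.fit_transform(X, y)   # strategic feature selection
print(X_sel.shape)  # (1000, 20)
```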
With the rapid development of deep learning methods, the data-driven approach has shown powerful advantages over the model-driven one. In this paper, we propose an end-to-end autoencoder communication system based on Deep Residual Shrinkage Networks (DRSNs), in which deep neural networks (DNNs) implement the coding, decoding, modulation, and demodulation functions of the communication system. Our proposed autoencoder communication system reduces signal noise more effectively by adding “attention mechanism” and “soft thresholding” modules and performs better at various signal-to-noise ratios (SNRs). We also show through comparative experiments that the system can operate at moderate block lengths, support different throughputs, and work efficiently in the AWGN channel. Simulation results show that our model has a higher Bit-Error-Rate (BER) gain and greatly improved decoding performance compared to conventional modulation and classical autoencoder systems at various signal-to-noise ratios.
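Two of the pieces highlighted here are easy to sketch: a soft-thresholding ("shrinkage") nonlinearity and an AWGN channel layer between encoder and decoder. In a DRSN the threshold is learned per channel by an attention block; the fixed threshold below is a simplification, and all layer sizes are toy assumptions.

```python
# Autoencoder-over-AWGN sketch with soft thresholding (illustrative only).
import torch
import torch.nn as nn

def soft_threshold(x, tau=0.1):
    # sign(x) * max(|x| - tau, 0): shrinks small, noise-like values to zero
    return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)

class AWGNChannel(nn.Module):
    def __init__(self, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db

    def forward(self, x):
        power = x.pow(2).mean()
        noise_power = power / (10 ** (self.snr_db / 10))
        return x + torch.sqrt(noise_power) * torch.randn_like(x)

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8))
channel = AWGNChannel(snr_db=10)

bits = torch.randint(0, 2, (128, 8)).float()    # 8-bit messages
tx = encoder(bits)
rx = soft_threshold(channel(tx))                # denoise received symbols
loss = nn.functional.binary_cross_entropy_with_logits(decoder(rx), bits)
loss.backward()
```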
During its growth stages, a plant is exposed to various diseases, and early detection of crop diseases is a major challenge in the horticulture industry: crop infections can harm total crop yield and reduce farmers' income if not identified early. Today's accepted method involves a professional plant pathologist diagnosing the disease by visual inspection of the afflicted plant leaves. This is an excellent use case for Community Assessment and Treatment Services (CATS), given the lengthy manual diagnosis process and the fact that identification accuracy is directly proportional to the pathologist's skill. An alternative to conventional Machine Learning (ML) methods, which require manual identification of parameters for exact results, is to develop a prototype that can classify without pre-processing. To automatically diagnose tomato leaf disease, this research proposes a hybrid model using a Convolutional Auto-Encoder (CAE) network and the CNN-based deep learning architecture DenseNet. To date, none of the modern systems described in this paper combine DenseNet, CAE, and a Convolutional Neural Network (CNN) to diagnose the ailments of tomato leaves automatically. The models were trained on a dataset obtained from the Plant Village repository, consisting of 9920 tomato leaves, and the model achieved an accuracy of 98.35%. Unlike the other approaches discussed in this paper, this hybrid strategy requires fewer training components; therefore, the training time to classify plant diseases with the trained algorithm, as well as the time to automatically detect the ailments of tomato leaves, is significantly reduced.
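One plausible composition of the two pieces, sketched below as an assumption rather than the paper's exact wiring: a small convolutional autoencoder denoises and compresses the leaf image, and a torchvision DenseNet classifies the reconstruction into disease classes (the 10-class head is also an assumption).

```python
# CAE -> DenseNet hybrid sketch (illustrative composition, toy batch).
import torch
import torch.nn as nn
from torchvision.models import densenet121

cae = nn.Sequential(                    # encoder-decoder over RGB leaves
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
classifier = densenet121(weights=None)
# Replace the final layer for an assumed 10 tomato-leaf disease classes
classifier.classifier = nn.Linear(classifier.classifier.in_features, 10)

leaves = torch.rand(4, 3, 224, 224)     # toy batch of leaf images
recon = cae(leaves)                     # CAE output, same spatial size
logits = classifier(recon)
print(logits.shape)  # torch.Size([4, 10])
```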
3D medical image reconstruction has significantly enhanced diagnostic accuracy, yet the reliance on densely sampled projection data remains a major limitation in clinical practice. Sparse-angle X-ray imaging, though safer and faster, poses challenges for accurate volumetric reconstruction due to limited spatial information. This study proposes a 3D reconstruction neural network based on adaptive weight fusion (AdapFusionNet) to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images. To address spatial inconsistency in multi-angle image reconstruction, an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and perform weighted fusion, thereby improving the final reconstruction quality. The reconstruction network is built on an autoencoder (AE) framework and uses orthogonal-angle X-ray images (frontal and lateral projections) as inputs; the encoder extracts 2D features, which the decoder maps into 3D space. This study utilises a lung CT dataset to obtain complete three-dimensional volumetric data, from which digitally reconstructed radiographs (DRRs) are generated at various angles to simulate X-ray images. Since real-world clinical X-ray images rarely come with perfectly corresponding 3D “ground truth”, using CT scans as the three-dimensional reference effectively supports the training and evaluation of deep networks for sparse-angle X-ray 3D reconstruction. Experiments conducted on the LIDC-IDRI dataset with simulated X-ray (DRR) images as training data demonstrate the superior performance of AdapFusionNet compared to other fusion methods. Quantitatively, AdapFusionNet achieves SSIM, PSNR, and MAE values of 0.332, 13.404, and 0.163, respectively, outperforming the other methods (SingleViewNet: 0.289, 12.363, 0.182; AvgFusionNet: 0.306, 13.384, 0.159). Qualitative analysis further confirms that AdapFusionNet significantly enhances the reconstruction of lung and chest contours while effectively reducing noise during reconstruction. These findings demonstrate that AdapFusionNet offers significant advantages for 3D reconstruction from sparse-angle X-ray images.
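The adaptive fusion step can be sketched as follows (the scoring network and all sizes are illustrative assumptions, not the paper's module): each per-view 3D reconstruction receives a scalar quality score, the scores are softmax-normalised into weights, and the fused volume is the weighted sum.

```python
# Adaptive weighted fusion sketch over per-view 3D reconstructions (toy).
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny 3-D scoring head: volume -> scalar quality score
        self.score = nn.Sequential(
            nn.Conv3d(1, 4, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(4, 1),
        )

    def forward(self, volumes):           # volumes: (views, B, 1, D, H, W)
        scores = torch.stack([self.score(v) for v in volumes])  # (views, B, 1)
        w = torch.softmax(scores, dim=0)   # normalised per-view weights
        w = w.unsqueeze(-1).unsqueeze(-1).unsqueeze(-1)
        return (w * volumes).sum(dim=0)    # weighted fusion over views

frontal = torch.rand(2, 1, 32, 32, 32)     # toy volume from frontal X-ray
lateral = torch.rand(2, 1, 32, 32, 32)     # toy volume from lateral X-ray
fused = AdaptiveFusion()(torch.stack([frontal, lateral]))
print(fused.shape)  # torch.Size([2, 1, 32, 32, 32])
```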