Abstract: Smart grid is envisaged as a power grid that is extremely reliable and flexible. The electrical grid has wide-area measuring devices such as phasor measurement units (PMUs) deployed to provide real-time grid information and resolve issues effectively and speedily without compromising system availability. The availability of measurement data has facilitated the development and application of machine learning approaches for power system protection and state estimation. This research proposes a transmission line fault detection and classification (FD&C) system based on an auto-encoder neural network. A comparison is made between a Multi-Layer Extreme Learning Machine (ML-ELM) network model and a Stacked Auto-Encoder neural network (SAE). Additionally, the performance of the developed models is compared with that of state-of-the-art classifier models employing feature datasets acquired by wavelet-transform-based feature extraction, as well as with other deep learning models. With substantially shorter testing time, the proposed auto-encoder models detect faults with 100% accuracy and classify faults with 99.92% and 99.79% accuracy. The computational efficiency of the ML-ELM model is demonstrated by its high classification accuracy with training and testing times below 50 ms. To emulate real system scenarios, the models are developed with noisy datasets with signal-to-noise ratios (SNR) ranging from 10 dB to 40 dB. The efficacy of the models is demonstrated with data from the IEEE 39-bus test system.
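The ML-ELM's speed comes from the extreme-learning-machine training rule: hidden-layer weights are drawn at random and only the output weights are solved in closed form. A minimal single-layer ELM sketch on toy two-class data (the sizes and data here are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: class 0 around (-2, -2), class 1 around (+2, +2).
X0 = rng.normal(-2.0, 0.5, size=(50, 2))
X1 = rng.normal(+2.0, 0.5, size=(50, 2))
X = np.vstack([X0, X1])
Y = np.zeros((100, 2))
Y[:50, 0] = 1.0   # one-hot labels
Y[50:, 1] = 1.0

# ELM: random, untrained hidden layer; only the output weights are solved for.
n_hidden = 20
W = rng.normal(size=(2, n_hidden))   # random input weights (never trained)
b = rng.normal(size=n_hidden)        # random biases
H = np.tanh(X @ W + b)               # hidden activations

# Output weights via least squares -- the only "training" step, hence the speed.
beta, *_ = np.linalg.lstsq(H, Y, rcond=None)

pred = np.argmax(H @ beta, axis=1)
truth = np.argmax(Y, axis=1)
accuracy = np.mean(pred == truth)
```

Because training reduces to one least-squares solve, sub-50 ms training times of the kind reported above are plausible even on modest hardware.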
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) (2020R1F1A1073395) and the basic research project of the Korea Institute of Geoscience and Mineral Resources (KIGAM) (GP2021-011, GP2020-031, 21-3117), funded by the Ministry of Science and ICT, Korea.
Abstract: This paper presents an innovative data-integration method that uses an iterative-learning scheme, a deep neural network (DNN) coupled with a stacked autoencoder (SAE), to solve issues encountered in many-objective history matching. The proposed method consists of a DNN-based inverse model with SAE-encoded static data, and iterative updates of the supervised-learning data based on distance-based clustering schemes. The DNN functions as an inverse model and produces encoded flattened data, while the SAE, as a pre-trained neural network, successfully reduces dimensionality and reliably reconstructs geomodels. The iterative-learning method improves the training data for the DNN, showing an error reduction with each iteration step. The proposed workflow achieves a small mean absolute percentage error, below 4% for all objective functions, while a typical multi-objective evolutionary algorithm fails to significantly reduce the initial population uncertainty. Iterative learning-based many-objective history matching estimates the trends in water cuts that are not reliably included in dynamic-data matching. This confirms that the proposed workflow constructs more plausible geomodels. The workflow would be a reliable alternative to the less-convergent Pareto-based multi-objective evolutionary algorithm in the presence of geological uncertainty and varying objective functions.
Funding: Supported by the National Natural Science Foundation of China (67441830108 and 41871224).
Abstract: Spectral and spatial features in remotely sensed data play an irreplaceable role in classifying crop types for precision agriculture. Despite the thriving establishment of handcrafted features, designing or selecting such features valid for specific crop types requires prior knowledge and thus remains an open challenge. Convolutional neural networks (CNNs) can effectively overcome this issue with their advanced ability to generate high-level features automatically, but they are still inadequate in mining spectral features compared with spatial features. This study proposes an enhanced spectral feature called the Stacked Spectral Feature Space Patch (SSFSP) for CNN-based crop classification. SSFSP is a stack of two-dimensional (2D) gridded spectral feature images that record various crop types' spatial and intensity distribution characteristics in a 2D feature space consisting of two spectral bands. SSFSP can be input into 2D-CNNs to support the simultaneous mining of spectral and spatial features, as the spectral features are converted to 2D images that can be processed by a CNN. We tested the performance of SSFSP by using it as the input to seven CNN models and one multilayer perceptron model for crop type classification, compared with using conventional spectral features as input. Using high-spatial-resolution hyperspectral datasets at three sites, the comparative study demonstrated that SSFSP outperforms conventional spectral features in classification accuracy, robustness, and training efficiency. The theoretical analysis summarizes three reasons for its excellent performance. First, SSFSP mines the spectral interrelationship with feature generality, which reduces the required number of training samples. Second, the intra-class variance can be largely reduced by grid partitioning. Third, SSFSP is a highly sparse feature, which reduces the dependence on the CNN model structure and enables early and fast convergence in model training. In conclusion, SSFSP has great potential for practical crop classification in precision agriculture.
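The core SSFSP construction, gridding a two-band spectral feature space into a 2D image of pixel counts, can be sketched with a 2D histogram. This is a simplified stand-in for the paper's feature, with illustrative band values and grid size:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two spectral bands for the pixels of one image patch (values in [0, 1]).
band_a = rng.uniform(0.0, 1.0, size=500)
band_b = rng.uniform(0.0, 1.0, size=500)

# Grid the two-band spectral feature space into a 16x16 image: each cell
# counts how many patch pixels fall into that (band_a, band_b) bin.
grid, _, _ = np.histogram2d(band_a, band_b, bins=16, range=[[0, 1], [0, 1]])

total = grid.sum()   # every patch pixel lands in exactly one cell
```

Stacking one such grid per band pair yields a multi-channel 2D image that a standard 2D-CNN can consume, which is the conversion step the abstract describes; most cells are empty, matching the sparsity claim.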
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1059346) and by the 2020 Research Fund (Project No. 1.180090.01) of UNIST (Ulsan National Institute of Science and Technology).
Abstract: An anomaly-based intrusion detection system (A-IDS) is a critical component of modern computing infrastructure, since it can discover new types of attacks. It typically relies on machine learning (ML) algorithms for detecting and classifying network traffic. To date, many algorithms have been proposed to improve the detection performance of A-IDS, using either individual or ensemble learners. In particular, ensemble learners have shown remarkable performance over individual learners in many applications, including the cybersecurity domain. However, most existing works still suffer from unsatisfactory results due to improper ensemble design. The aim of this study is to demonstrate the effectiveness of a stacking ensemble-based model for A-IDS, where deep learning (e.g., a deep neural network [DNN]) is used as the base learner. The effectiveness of the proposed model and the base DNN model is benchmarked empirically in terms of several performance metrics, i.e., Matthews correlation coefficient, accuracy, and false alarm rate. The results indicate that the proposed model is superior to the base DNN model as well as other existing ML algorithms found in the literature.
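The stacking idea the study builds on can be sketched with toy base learners: a meta-learner fitted on base-learner outputs can outperform any single base model. The learners below are deliberately simple stand-ins, not the paper's DNNs:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy binary labels plus three independently noisy "base learner" outputs,
# each flipping the true label 20% of the time.
y = rng.integers(0, 2, size=500).astype(float)

def noisy_learner(labels):
    flips = rng.random(labels.size) < 0.2
    return np.where(flips, 1.0 - labels, labels)

b1, b2, b3 = noisy_learner(y), noisy_learner(y), noisy_learner(y)

# Meta-learner: least-squares weights over the base outputs. It learns an
# (approximate) majority vote, which beats any single ~80%-accurate base.
Z = np.column_stack([b1, b2, b3, np.ones_like(y)])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
stacked = (Z @ w > 0.5).astype(float)

base_accs = [np.mean(b == y) for b in (b1, b2, b3)]
stacked_acc = np.mean(stacked == y)
```

The gain depends on the base learners making partially independent errors, which is why the abstract stresses that improper ensemble design leads to unsatisfactory results.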
Funding: Project supported by the National High-Technology Research and Development Program of China (Grant No. 2003AA517020).
Abstract: The temperature models of the anode and cathode of a direct methanol fuel cell (DMFC) stack were established using a radial basis function (RBF) neural network identification technique to address the modeling and control problem of the DMFC stack. An adaptive fuzzy neural network temperature controller was designed based on the identified models, and the parameters of the controller were regulated by a novel back-propagation (BP) algorithm. Simulation results show that the RBF neural network identification modeling method is correct and effective and that the established models have good accuracy. Moreover, the adaptive fuzzy neural network temperature controller shows superior performance.
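RBF network identification of a static nonlinearity reduces to a linear least-squares problem once the Gaussian centres and widths are fixed. A minimal sketch (the "plant", the centres, and the width are illustrative, not the DMFC thermal model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown static nonlinearity to identify (stands in for measured plant data).
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.01 * rng.normal(size=x.size)

# Gaussian RBF layer: fixed centres on a grid, fixed width.
centres = np.linspace(-3, 3, 15)
width = 0.5
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

# Linear output weights by least squares -- the classic RBF identification step.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```

Because only the output layer is fitted, identification is fast and well-conditioned, which makes RBF networks a common choice for building plant models that a controller is then designed against.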
Funding: Supported in part by the National Natural Science Foundation of China (No. 51606213) and by the National Major Science and Technology Projects (No. J2019-III-0010-0054).
Abstract: Icing is an important factor threatening aircraft flight safety. According to the requirements of airworthiness regulations, aircraft icing safety assessment must be carried out based on the ice shapes formed under different icing conditions. Due to the complexity of the icing process, rapid assessment of ice shape remains an important challenge. In this paper, an efficient prediction model of aircraft icing is established based on the deep belief network (DBN) and the stacked auto-encoder (SAE), both deep neural networks. The detailed network structures are designed, and the networks are trained on samples obtained by icing numerical computation. The model is then applied to ice shape evaluation of the NACA0012 airfoil. The results show that the model can accurately capture the nonlinear behavior of aircraft icing and thus make excellent ice shape predictions. The model provides an important tool for aircraft icing analysis.
Funding: This work is supported in part by the National Science Foundation of China (Grant Nos. 61672392, 61373038) and in part by the National Key Research and Development Program of China (Grant No. 2016YFC1202204).
Abstract: Software defect prediction plays an important role in software quality assurance. However, the performance of the prediction model is susceptible to irrelevant and redundant features. In addition, previous studies mostly regard software defect prediction as a single-objective optimization problem, and multi-objective software defect prediction has not been thoroughly investigated. For these two reasons, we propose the following solutions in this paper: (1) we leverage an advanced deep neural network, the Stacked Contractive AutoEncoder (SCAE), to extract robust deep semantic features from the original defect features, with stronger discrimination capacity between classes (defective or non-defective); (2) we propose a novel multi-objective defect prediction model named SMONGE that uses the multi-objective NSGA-II algorithm to optimize an advanced neural network, the extreme learning machine (ELM), based on state-of-the-art Pareto-optimal solutions according to the features extracted by SCAE. We mainly consider two objectives. One objective is to maximize the performance of the ELM, which refers to the benefit of the SMONGE model. The other objective is to minimize the output weight norm of the ELM, which is related to the cost of the SMONGE model. We compare SCAE with six state-of-the-art feature extraction methods, and we compare the SMONGE model with multiple baseline models, comprising four classic defect predictors and the MONGE model without SCAE, across 20 open-source software projects. The experimental results verify the superiority of SCAE and SMONGE on seven evaluation metrics.
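NSGA-II's selection rests on Pareto dominance between objective vectors, here (ELM performance loss, output weight norm). A minimal sketch of dominance and the non-dominated front under a minimisation convention (the objective values are illustrative):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimisation convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Four candidate models scored on two objectives to minimise.
pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = pareto_front(pts)   # (3, 3) is dominated by (2, 2)
```

The Pareto-optimal solutions the abstract mentions are exactly this front: trade-offs between the two objectives that no other candidate improves on in both.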
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number 223202.
Abstract: Cardiac disease is a chronic condition that impairs the heart's functionality. It includes conditions such as coronary artery disease, heart failure, arrhythmias, and valvular heart disease. These conditions can lead to serious complications and can even be life-threatening if not detected and managed in time. Researchers have utilized machine learning (ML) and deep learning (DL) to identify heart abnormalities swiftly and consistently. Various approaches have been applied to predict and treat heart disease using ML and DL. This paper proposes a Machine and Deep Learning-based Stacked Model (MDLSM) to predict heart disease accurately. ML approaches such as eXtreme Gradient Boosting (XGB), Random Forest (RF), Naive Bayes (NB), Decision Tree (DT), and K-Nearest Neighbor (KNN), along with two DL models, a Deep Neural Network (DNN) and a Fine-Tuned Deep Neural Network (FT-DNN), are used to detect heart disease. These models rely on electronic medical data, which increases the likelihood of correctly identifying and diagnosing heart disease. Well-known evaluation measures (i.e., accuracy, precision, recall, F1-score, confusion matrix, and area under the receiver operating characteristic (ROC) curve) are employed to check the efficacy of the proposed approach. Results reveal that the MDLSM achieves 94.14% prediction accuracy, which is 8.30% better than the results from the baseline experiments, recommending our proposed approach for identifying and diagnosing heart disease.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group No. RG-1441-379 and for their technical support.
Abstract: The exponential increase in new coronavirus disease 2019 (COVID-19) cases and deaths has made COVID-19 the leading cause of death in many countries. Thus, in this study, we propose an efficient technique for the automatic detection of COVID-19 and pneumonia from X-ray images. A stacked denoising convolutional autoencoder (SDCA) model is proposed to classify X-ray images into three classes: normal, pneumonia, and COVID-19. The SDCA model is used to obtain a good representation of the input data and to extract the relevant features from noisy images. The proposed model's architecture is mainly composed of eight autoencoders, which feed into two dense layers and a SoftMax classifier. The proposed model was evaluated with 6356 images drawn from datasets from different sources. The experiments and evaluation of the proposed model were applied to an 80/20 training/validation split and to five-fold cross-validation, respectively. The metrics used for the SDCA model were classification accuracy, precision, sensitivity, and specificity for both schemes. Our results demonstrate the superiority of the proposed model in classifying X-ray images, with a high accuracy of 96.8%. Therefore, this model can help physicians accelerate COVID-19 diagnosis.
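The denoising objective behind the SDCA, mapping a corrupted input back to its clean version, can be sketched with a linear least-squares "denoiser" on low-rank toy data. This linear map is a deliberately simple stand-in for the convolutional encoder/decoder stack, which is far more expressive:

```python
import numpy as np

rng = np.random.default_rng(3)

# Clean samples lie on a 1-D subspace of R^5 (a strong low-rank structure).
direction = rng.normal(size=5)
clean = rng.normal(size=(300, 1)) * direction[None, :]

# Denoising setup: the model sees corrupted inputs but must output clean data.
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Minimal linear "denoiser" fitted by least squares on (noisy, clean) pairs.
W, *_ = np.linalg.lstsq(noisy, clean, rcond=None)

denoised = noisy @ W
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
```

Because the clean data has low-dimensional structure, even this linear map removes most of the corruption; the same principle is what lets a denoising autoencoder learn robust features from noisy X-ray images.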
Abstract: In recent years, computer vision has found wide application in maritime surveillance with its sophisticated algorithms and advanced architecture. Automatic ship detection with computer vision techniques provides an efficient means to monitor and track ships in water bodies. Waterways, an important medium of transport, require continuous monitoring for the protection of national security. Remote sensing satellite images of ships in harbours and water bodies are the image data that enable neural network models to localize ships and facilitate early identification of possible threats at sea. This paper proposes a deep learning-based model capable of classifying between ship and no-ship images as well as localizing ships in the original images using a bounding-box technique. Furthermore, the classified ships are segmented with a deep learning-based auto-encoder model. In terms of classification, the proposed model provides successful results, achieving 99.5% validation and 99.2% training accuracy. The auto-encoder model produces 85.1% validation and 84.2% training accuracy. Moreover, the IoU metric of the segmented images is found to be 0.77. The experimental results reveal that the model is accurate and can be implemented for automatic ship detection in water bodies, taking remote sensing satellite images as input to the computer vision system.
Abstract: In network settings, one of the major weaknesses that threaten network protocols is insecurity. In most cases, unscrupulous people or bad actors can access information through unsecured connections by planting software, i.e., malicious software, otherwise known as anomalies. Internet users are also constantly plagued by viruses on their systems, activated when a harmless link is clicked, a case of a true benign detected as false. Deep learning is very adept at dealing with such cases, but it sometimes has its own faults when dealing with benign cases. Here we adopt a dynamic control system (DCSYS) that analyzes data packets based on a benign scenario to truly report false benigns and exclude anomalies. Its performance is compared with that of artificial neural network auto-encoders to establish its predictive power. Results show that, although physical systems can adapt securely, the approach can be used on network data packets to identify true benign cases.
Abstract: The reduction of Hamiltonian systems aims to build smaller reduced models, valid over a certain range of time and parameters, in order to reduce computing time. By maintaining the Hamiltonian structure in the reduced model, certain long-term stability properties can be preserved. In this paper, we propose a nonlinear reduction method for models coming from the spatial discretization of partial differential equations: it is based on convolutional auto-encoders and Hamiltonian neural networks. Their training is coupled in order to learn the encoder-decoder operators and the reduced dynamics simultaneously. Several test cases on nonlinear wave dynamics show that the method has better reduction properties than standard linear Hamiltonian reduction methods.
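The long-term stability argument rests on preserving Hamiltonian structure under discretisation. A minimal illustration: a symplectic (semi-implicit) Euler integrator keeps the energy of a harmonic oscillator bounded over many steps, rather than drifting as a generic integrator would:

```python
import numpy as np

# Harmonic oscillator H(q, p) = (p^2 + q^2) / 2. Symplectic Euler exactly
# conserves a nearby "shadow" Hamiltonian, so the true energy stays bounded.
def symplectic_euler(q, p, dt, steps):
    energies = []
    for _ in range(steps):
        p = p - dt * q          # dp/dt = -dH/dq
        q = q + dt * p          # dq/dt = +dH/dp (using the updated p)
        energies.append(0.5 * (p * p + q * q))
    return np.array(energies)

E = symplectic_euler(q=1.0, p=0.0, dt=0.05, steps=10_000)
drift = E.max() - E.min()       # stays small even after 10,000 steps
```

A reduced model that breaks the Hamiltonian structure loses exactly this guarantee, which is why the paper couples the auto-encoder with a Hamiltonian neural network for the reduced dynamics.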
Abstract: To overcome the limitations of single-network prediction and the volatility of time series, a short-term load forecasting method combining singular spectrum analysis (SSA) with the Stacking framework is proposed. Random forest is used to screen the features strongly correlated with historical load, and SSA denoises the load data to simplify the model's computation. Based on the Stacking framework, a new ensemble model is built from long short-term memory (LSTM) with a self-attention mechanism (SA), a radial basis function (RBF) neural network, and linear regression, with cross-validation used to avoid overfitting. The method is validated on PJM and Australian electricity load datasets. Simulation results show that the proposed model achieves higher prediction accuracy than the comparison models.
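The SSA denoising step can be sketched as embed, decompose, reconstruct: build a Hankel trajectory matrix from the series, keep the leading SVD components, and diagonal-average back to a series. Window length and rank below are illustrative choices, not the paper's settings:

```python
import numpy as np

def ssa_reconstruct(series, window, rank):
    """Rank-`rank` SSA reconstruction: embed, SVD, diagonal-average back."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank approximation
    # Diagonal averaging (Hankelisation) maps the matrix back to a series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
trend = 2.0 * t                          # slowly varying "load" component
noisy = trend + 0.1 * rng.normal(size=t.size)
denoised = ssa_reconstruct(noisy, window=40, rank=2)
```

The leading components carry the smooth load trend while the discarded components carry most of the noise, which is what makes the subsequent forecasting model's job easier.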