The aerial deployment method enables Unmanned Aerial Vehicles (UAVs) to be directly positioned at the required altitude for their mission. This method typically employs folding technology to improve loading efficiency, with applications such as the gravity-only aerial deployment of high-aspect-ratio solar-powered UAVs and the aerial takeoff of fixed-wing drones in Mars research. However, the significant morphological changes during deployment are accompanied by strong nonlinear dynamic aerodynamic forces, which result in multiple degrees of freedom and unstable behavior. This hinders the description and analysis of unknown dynamic behaviors and further complicates the design of deployment strategies and flight control. To address this issue, this paper proposes an analysis method for dynamic behaviors during aerial deployment based on the Variational Autoencoder (VAE). Focusing on the gravity-only deployment problem of high-aspect-ratio foldable-wing UAVs, the method encodes the multi-degree-of-freedom unstable motion signals into a low-dimensional feature space through a data-driven approach. By clustering in the feature space, this paper identifies and studies several dynamic behaviors during aerial deployment. The research presented here offers a new method and perspective for feature extraction and analysis of complex, difficult-to-describe extreme flight dynamics, guiding research on the design and control strategies of aerially deployed UAVs.
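As a rough illustration of the pipeline this abstract describes (encoding unstable motion signals with a VAE and clustering in the resulting feature space), a minimal Python sketch is given below. It is not the authors' implementation: the window length, layer widths, three-dimensional latent space, and the use of K-means with four clusters are all assumptions.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class MotionVAE(nn.Module):
    """Minimal VAE: encodes a flattened motion-signal window into a low-dimensional latent space."""
    def __init__(self, in_dim=128, latent_dim=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Training objective: reconstruction term + KL divergence to the standard normal prior
    rec = nn.functional.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# After training, cluster the latent means to separate distinct dynamic behaviors.
model = MotionVAE()
signals = torch.randn(500, 128)          # placeholder for windowed deployment motion signals
with torch.no_grad():
    _, mu, _ = model(signals)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(mu.numpy())

The cluster labels returned by K-means would then be inspected to characterize the distinct dynamic behaviors during deployment.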
Predicting the lithium-ion (Li-ion) battery degradation trajectory in the early phase is of great importance for arranging the maintenance of battery energy storage systems. However, under different operating conditions, Li-ion batteries present distinct degradation patterns, and it is challenging to capture the negligible capacity fade in early cycles. Although data-driven methods show promising performance, insufficient data remains a major issue, since ageing experiments on batteries are slow and expensive. In this study, we propose twin autoencoders integrated into a two-stage method to predict degradation trajectories from early cycles. The two-stage method predicts the degradation from coarse to fine. The twin autoencoders serve as a feature extractor and a synthetic data generator, respectively. Finally, a learning procedure based on the long short-term memory (LSTM) network is designed to hybridize the learning process between the real and synthetic data. The performance of the proposed method is verified on three datasets, and the experimental results show that it achieves accurate predictions compared with its competitors.
To enhance the accuracy and efficiency of bridge damage identification, a novel data-driven damage identification method was proposed. First, a convolutional autoencoder (CAE) was used to extract key features from the acceleration signals of the bridge structure through data reconstruction. The extreme gradient boosting tree (XGBoost) was then applied to the feature data to achieve damage detection with high accuracy and high performance. The proposed method was applied in a numerical simulation study on a three-span continuous girder and further validated experimentally on a scaled model of a cable-stayed bridge. The numerical simulation results show that the identification errors remain within 2.9% for six single-damage cases and within 3.1% for four double-damage cases. The experimental validation demonstrates that when the tension in a single cable of the cable-stayed bridge decreases by 20%, the method accurately identifies damage at different cable locations using only sensors installed on the main girder, achieving identification accuracies above 95.8% in all cases. The proposed method shows high identification accuracy and generalization ability across various damage scenarios.
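A minimal sketch of the two-stage idea in this abstract — a convolutional autoencoder as feature extractor followed by an XGBoost classifier on the encoded features — is given below. The layer sizes, the 256-sample signal window, and the three damage classes are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn
import numpy as np
from xgboost import XGBClassifier

class ConvAE(nn.Module):
    """1-D convolutional autoencoder for acceleration signals; the encoder output is the feature vector."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 4, kernel_size=5, stride=2, padding=2), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(4, 8, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=5, stride=2, padding=2, output_padding=1))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

signals = torch.randn(200, 1, 256)        # placeholder acceleration windows
labels = np.random.randint(0, 3, 200)     # placeholder damage-case labels

model = ConvAE()
# ... the CAE would first be trained by minimising the reconstruction error ...
with torch.no_grad():
    _, z = model(signals)
features = z.flatten(1).numpy()           # encoded features handed to the classifier

clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(features, labels)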
Significant advancements have been witnessed in visual tracking applications leveraging ViT in recent years, mainly due to the formidable modeling capabilities of the Vision Transformer (ViT). However, the strong performance of such trackers heavily relies on ViT models pretrained for long periods, limiting more flexible model designs for tracking tasks. To address this issue, we propose an efficient unsupervised ViT pretraining method for the tracking task based on masked autoencoders, called TrackMAE. During pretraining, we employ two shared-parameter ViTs, serving as the appearance encoder and motion encoder, respectively. The appearance encoder encodes randomly masked image data, while the motion encoder encodes randomly masked pairs of video frames. Subsequently, an appearance decoder and a motion decoder separately reconstruct the original image data and video frame data at the pixel level. In this way, the ViT learns to understand both the appearance of images and the motion between video frames simultaneously. Experimental results demonstrate that ViT-Base and ViT-Large models, pretrained with TrackMAE and combined with a simple tracking head, achieve state-of-the-art (SOTA) performance without additional design. Moreover, compared with the currently popular MAE pretraining methods, TrackMAE consumes only 1/5 of the training time, which will facilitate the customization of diverse models for tracking. For instance, we additionally customize a lightweight ViT-XS, which achieves SOTA efficient tracking performance.
Wayside monitoring is a promising cost-effective alternative for predicting damage in rolling stock. The main goal of this work is to present an unsupervised methodology to identify out-of-roundness (OOR) wheel damage, such as wheel flats and polygonal wheels. This automatic damage identification algorithm is based on the vertical acceleration evaluated on the rails using a virtual wayside monitoring system and involves the application of a two-step procedure. The first step defines a confidence boundary using (healthy) measurements evaluated on the rail, constituting a baseline. The second step classifies damage for predefined scenarios with different levels of severity. The proposed procedure is based on a machine learning methodology and includes the following stages: (1) data collection; (2) damage-sensitive feature extraction from the acquired responses using a neural network model, i.e., the sparse autoencoder (SAE); (3) data fusion based on the Mahalanobis distance; and (4) unsupervised feature classification by implementing outlier and cluster analysis. This procedure considers baseline responses at different speeds and rail irregularities to train the SAE model. The trained SAE can then reconstruct test responses (not used in training), allowing the cumulative difference between the original and reconstructed signals to be computed. The results prove the efficiency of the proposed approach in identifying the two most common types of OOR in railway wheels.
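Stages (3) and (4) of the procedure — fusing residual features with the Mahalanobis distance and drawing a confidence boundary from the healthy baseline — can be sketched as follows. The residual dimensionality, the 99th-percentile boundary, and the placeholder data are assumptions for illustration, not the paper's settings.

import numpy as np

def mahalanobis_outlier_scores(baseline_features, test_features):
    """Damage indicator: Mahalanobis distance of per-run residual features
    from the healthy-baseline distribution."""
    mu = baseline_features.mean(axis=0)
    cov = np.cov(baseline_features, rowvar=False)
    cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse for numerical safety
    diff = test_features - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# baseline: cumulative |original - SAE reconstruction| features per healthy run (placeholder values)
baseline = np.random.randn(300, 6) * 0.1 + 1.0
test = np.random.randn(50, 6) * 0.1 + 1.6         # possibly damaged runs (placeholder values)

scores = mahalanobis_outlier_scores(baseline, test)
threshold = np.percentile(mahalanobis_outlier_scores(baseline, baseline), 99)  # confidence boundary
is_damaged = scores > threshold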
Network embedding (NE) tries to learn the potential properties of complex networks represented in a low-dimensional feature space. However, existing deep learning-based NE methods are time-consuming, as they need to train a dense architecture for deep neural networks with extensive unknown weight parameters. A sparse deep autoencoder (called SPDNE) for dynamic NE is proposed, aiming to learn the network structures while preserving node evolution with low computational complexity. SPDNE uses an optimal sparse architecture to replace the fully connected architecture in the deep autoencoder while maintaining the performance of these models in dynamic NE. An adaptive simulated algorithm is then proposed to find the optimal sparse architecture for the deep autoencoder. The performance of SPDNE over three dynamic NE models (i.e., the sparse architecture-based deep autoencoder method, DynGEM, and ElvDNE) is evaluated on three well-known benchmark networks and five real-world networks. The experimental results demonstrate that SPDNE can reduce about 70% of the weight parameters of the deep autoencoder architecture during the training process while preserving the performance of these dynamic NE models. The results also show that SPDNE achieves the highest accuracy on 72 out of 96 edge prediction and network reconstruction tasks compared with state-of-the-art dynamic NE algorithms.
We study the effects of quantization and additive white Gaussian noise (AWGN) in transmitting latent representations of images over a noisy communication channel. The latent representations are obtained using autoencoders (AEs). We analyze image reconstruction and classification performance for different channel noise powers, latent vector sizes, and numbers of quantization bits used for the latent variables, as well as the AEs' parameters. The results show that the digital transmission of latent representations using conventional AEs alone is extremely vulnerable to channel noise and quantization effects. We then propose a combination of a basic AE and a denoising autoencoder (DAE) to denoise the corrupted latent vectors at the receiver. This approach demonstrates robustness against channel noise and quantization effects and enables a significant improvement in image reconstruction and classification performance, particularly in adverse scenarios with high noise powers and significant quantization effects.
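The transmission chain studied here (quantization of the latent vector, AWGN, then a DAE that denoises the corrupted latents at the receiver) might look roughly like the sketch below; the latent size, bit depth, noise level, and network widths are assumed, not taken from the paper.

import torch
import torch.nn as nn

def quantize(z, n_bits, z_min=-3.0, z_max=3.0):
    """Uniform quantization of latent values to 2**n_bits levels before transmission."""
    levels = 2 ** n_bits
    z = z.clamp(z_min, z_max)
    step = (z_max - z_min) / (levels - 1)
    return torch.round((z - z_min) / step) * step + z_min

def channel(z, n_bits=4, noise_std=0.2):
    """Quantization followed by additive white Gaussian noise."""
    return quantize(z, n_bits) + noise_std * torch.randn_like(z)

# DAE trained to map corrupted latent vectors back to the clean ones at the receiver.
latent_dim = 32
dae = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)

clean_latents = torch.randn(1024, latent_dim)      # placeholder for the basic AE's encoder outputs
for _ in range(100):
    corrupted = channel(clean_latents)
    loss = nn.functional.mse_loss(dae(corrupted), clean_latents)
    opt.zero_grad(); loss.backward(); opt.step()

At test time the denoised latents would be passed to the basic AE's decoder (or a classifier) instead of the raw corrupted latents.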
Wireless sensor networks are increasingly used in sensitive event monitoring. However, the various abnormal data generated by sensors greatly decrease the accuracy of event detection. Although many methods have been proposed to deal with abnormal data, they generally detect and/or repair all abnormal data without further differentiation. In fact, besides the abnormal data caused by events, sensor nodes are known to be prone to generating abnormal data due to factors such as sensor hardware drawbacks and random effects from external sources. Dealing with all abnormal data without differentiation results in false detection or missed detection of events. In this paper, we propose a data cleaning approach based on Stacked Denoising Autoencoders (SDAE) and multi-sensor collaboration. We detect all abnormal data with the SDAE and then differentiate the abnormal data through multi-sensor collaboration. The abnormal data caused by events are left unchanged, while the abnormal data caused by other factors are repaired. Simulations based on real data show the efficiency of the proposed approach.
Supervised machine learning algorithms have been widely used in seismic exploration processing, but the lack of labeled examples complicates their application. Therefore, we propose a seismic labeled data expansion method based on deep variational autoencoders (VAE), which are built from neural networks and contain two parts: an Encoder and a Decoder. A lack of training samples leads to overfitting of the network; we train the VAE with the whole seismic dataset, which is a data-driven process and greatly alleviates the risk of overfitting. The Encoder learns to map the seismic waveform Y to latent deep features z, and the Decoder learns to reconstruct the high-dimensional waveform Ŷ from the latent deep features z. We then feed the labeled seismic data into the Encoder to obtain their latent deep features, and fit the deep feature distribution of each labeled class with a Gaussian mixture model. We resample a large number of expanded deep features z* according to the Gaussian mixture model and feed them into the Decoder to generate expanded seismic data. Experiments on synthetic and real data show that our method alleviates the problem of lacking labeled seismic data for supervised seismic facies analysis.
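The expansion step described above maps onto a short routine: encode the labeled waveforms of one class, fit a Gaussian mixture to their deep features, resample expanded features z*, and decode them. The sketch below assumes an already-trained encoder/decoder pair and uses illustrative dimensions and placeholder networks.

import torch
import numpy as np
from sklearn.mixture import GaussianMixture

def expand_class(encoder, decoder, labeled_waveforms, n_new, n_components=3):
    """Fit a Gaussian mixture to the deep features z of one labeled class,
    resample expanded features z*, and decode them into synthetic waveforms."""
    with torch.no_grad():
        z = encoder(labeled_waveforms)            # latent deep features of the class
    gmm = GaussianMixture(n_components=n_components).fit(z.numpy())
    z_star, _ = gmm.sample(n_new)                 # expanded deep features z*
    with torch.no_grad():
        synthetic = decoder(torch.as_tensor(z_star, dtype=torch.float32))
    return synthetic

# Placeholder encoder/decoder standing in for the trained VAE halves.
encoder = torch.nn.Sequential(torch.nn.Linear(64, 8))
decoder = torch.nn.Sequential(torch.nn.Linear(8, 64))
class_waveforms = torch.randn(40, 64)             # labeled waveforms of one facies class
expanded = expand_class(encoder, decoder, class_waveforms, n_new=200)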
A pathological complete response to neoadjuvant chemoradiotherapy offers patients with locally advanced rectal cancer the highest chance of survival. However, no valid prediction model is yet available, and an efficient feature extraction technique is required to increase a prediction model's precision. The CDAS (Cancer Data Access System) program provides cancer data along with images and biospecimens; in this study, we use its bowel cancer (colorectal cancer) datasets. This study proposes a survival prediction method for rectal cancer and determines which deep learning algorithm works best by comparing prediction accuracy. The first task leading to correct findings is corpus cleansing. Next, data preprocessing is performed, comprising exploratory data analysis, pruning, and normalization of the data, which is required to obtain the data features needed to design a model for cancer detection at an early stage. The data corpus is then separated into two sub-corpora, training data and test data, which are used to assess the correctness of the constructed model. The study compares the accuracy of our autoencoder with that of other deep learning algorithms, such as the artificial neural network, convolutional neural network, and restricted Boltzmann machine, and presents each model's accuracy graphically for the proposed methodology for patients with rectal cancer. Various criteria, including the true positive rate, the receiver operating characteristic (ROC) curve, and accuracy scores, are used in the experiments, and an accuracy score is determined for each model. The simulation outcomes demonstrate that the survival of rectal cancer patients can be estimated using prediction models: variational deep encoders achieve an accuracy of 94% in this cancer prediction task and 95% for the area under the ROC curve. The findings demonstrate that automated prediction algorithms are capable of properly estimating rectal cancer patients' chances of survival, with the best results, 95% accuracy, generated by deep autoencoders.
Anomaly detection (AD) is an important task in a broad range of domains. A popular choice for AD is the Deep Support Vector Data Description model. When learning such models, normal data is mapped close to, and anomalous data far from, a center in some latent space, enabling the construction of a sphere to separate the two types of data. Empirically, it has been observed (i) that the center and radius of such a sphere largely depend on the training data and model initialization, which leads to difficulties when selecting a threshold, and (ii) that the center and radius of this sphere strongly impact the model's AD performance on unseen data. In this work, a more robust AD solution is proposed that (i) defines a sphere with a fixed radius and margin in some latent space and (ii) enforces the encoder, which maps the input to the latent space, to encode the normal data inside a small sphere and the anomalous data outside a larger sphere with the same center. Experimental results indicate that the proposed algorithm attains higher performance than the alternatives, and that the difference in size between the two spheres has only a minor impact on performance.
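One plausible way to write down the stated objective — normal data inside a fixed-radius sphere, anomalous data outside a larger sphere with the same center — is a hinge-style loss such as the sketch below. The radius, margin, encoder architecture, and exact loss form are assumptions; the paper's formulation may differ.

import torch
import torch.nn as nn

def two_sphere_loss(z, y, center, radius=1.0, margin=0.5):
    """Push normal samples (y=0) inside the sphere of radius `radius` and
    anomalous samples (y=1) outside the larger sphere of radius `radius + margin`,
    both spheres sharing the same fixed center."""
    dist = torch.norm(z - center, dim=1)
    loss_normal = torch.relu(dist - radius)               # zero once inside the small sphere
    loss_anomaly = torch.relu(radius + margin - dist)     # zero once outside the large sphere
    return torch.where(y.bool(), loss_anomaly, loss_normal).mean()

encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 8))
center = torch.zeros(8)                                   # fixed center in the latent space
x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))                            # 0 = normal, 1 = anomalous
loss = two_sphere_loss(encoder(x), y.float(), center)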
Visual motion segmentation (VMS) is an important and key part of many intelligent crowd systems. It can be used to determine the flow behavior through a crowd and to spot unusual life-threatening incidents such as crowd stampedes and crashes, which pose a serious risk to public safety and have resulted in numerous fatalities over the past few decades. Trajectory clustering has become one of the most popular methods in VMS. However, complex data, such as a large number of samples and parameters, makes it difficult for trajectory clustering to produce accurate motion segmentation results. This study introduces a spatial-angular stacked sparse autoencoder model (SA-SSAE) with l2-regularization and softmax, a powerful deep learning method for visual motion segmentation that clusters similar motion patterns belonging to the same cluster. The proposed model can extract meaningful high-level features using only spatial-angular features obtained from refined tracklets (a.k.a. 'trajectories'). We adopt l2-regularization and sparsity regularization, which can learn sparse representations of features, to guarantee the sparsity of the autoencoders. We employ the softmax layer to map the data points into accurate cluster representations. One of the main advantages of the SA-SSAE framework is that it can manage VMS even when individuals move around randomly. This framework helps cluster the motion patterns effectively with higher accuracy. We put forward a new dataset with its manual ground truth, comprising 21 crowd videos. Experiments conducted on two crowd benchmarks demonstrate that the proposed model can group trajectories more accurately than the traditional clustering approaches used in previous studies. The proposed SA-SSAE framework achieved a 0.11 improvement in accuracy and a 0.13 improvement in the F-measure compared with the best current method on the CUHK dataset.
Fault diagnosis of electric motors is a fundamental task for production line testing, and it is usually performed by experienced human operators. In recent years, several methods have been proposed in the literature for detecting faults automatically. Deep neural networks have been successfully employed for this task but, to the authors' knowledge, have never been used in an unsupervised scenario. This paper proposes an unsupervised method for diagnosing faults of electric motors by using a novelty detection approach based on deep autoencoders. In the proposed method, vibration signals are acquired with accelerometers and processed to extract LogMel coefficients as features. The autoencoders are trained using normal data only, i.e., data that do not contain faults. Three different autoencoder architectures have been evaluated: the multilayer perceptron (MLP) autoencoder, the convolutional neural network autoencoder, and the recurrent autoencoder composed of long short-term memory (LSTM) units. The experiments were conducted on a dataset created by the authors, and the proposed approaches were compared with the one-class support vector machine (OC-SVM) algorithm. Performance was evaluated in terms of the area under the receiver operating characteristic curve (AUC), and the results show that all the autoencoder-based approaches outperform the OC-SVM algorithm. Moreover, the MLP autoencoder is the best-performing architecture, achieving an AUC of 99.11%.
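The core novelty-detection recipe (train an autoencoder on fault-free feature vectors only, score test samples by reconstruction error, and evaluate with the AUC) can be sketched as follows for the MLP case. The feature dimensionality, network sizes, and placeholder data are assumptions, not the authors' settings.

import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

# MLP autoencoder trained on healthy (normal) LogMel feature vectors only;
# the reconstruction error serves as the novelty score at test time.
ae = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal_train = torch.randn(500, 64)            # placeholder LogMel features, no faults
for _ in range(200):
    loss = nn.functional.mse_loss(ae(normal_train), normal_train)
    opt.zero_grad(); loss.backward(); opt.step()

test_x = torch.randn(100, 64)                  # mixed healthy / faulty test features
test_y = torch.randint(0, 2, (100,))           # 1 = faulty
with torch.no_grad():
    scores = ((ae(test_x) - test_x) ** 2).mean(dim=1)
auc = roc_auc_score(test_y.numpy(), scores.numpy())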
Many existing aircraft engine fault detection methods are highly dependent on performance deviation data provided by the original equipment manufacturer. To improve independent engine fault detection capability, Aircraft Communications Addressing and Reporting System (ACARS) data can be used. However, owing to its high dimensionality, complex correlations between parameters, and large noise content, it is difficult for existing methods to detect faults effectively using ACARS data. To solve this problem, a novel engine fault detection method based on original ACARS data is proposed. First, inspired by computer vision methods, all variables were divided into separate groups according to their correlations. Then, an improved convolutional denoising autoencoder was used to extract the features of each group. Finally, all of the extracted features were fused to form feature vectors, so that fault samples could be identified based on these feature vectors. Experiments were conducted to validate the effectiveness and efficiency of our method and other competing methods, using real ACARS data as the data source. The results reveal the good performance of our method with regard to comprehensive fault detection and robustness. Additionally, the computational and time costs of our method are shown to be relatively low.
In this study, the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based, deep learning (LSTM), and ensemble learning (LightGBM) models. These models were trained with four different feature sets, and their performances were evaluated in terms of accuracy and F-measure metrics. While the first experiments directly used each stock's own features as the model inputs, the second experiments utilized stock features reduced through Variational AutoEncoders (VAE). In the last experiments, in order to capture the effects of the other banking stocks on individual stock performance, the features belonging to the other stocks were also given as inputs to our models. Combining the other stocks' features was done for both the own (named allstock_own) and the VAE-reduced (named allstock_VAE) stock features, and the expanded feature sets were reduced by Recursive Feature Elimination. While the highest success rate reached 0.685 with allstock_own and the LSTM with attention model, the combination of allstock_VAE and the LSTM with attention model obtained an accuracy rate of 0.675. Although the classification results achieved with the two feature types were close, allstock_VAE achieved these results using nearly 16.67% fewer features than allstock_own. When all experimental results were examined, it was found that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features. It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained with the own stock features.
Objective and accurate evaluation of rock mass quality classification is the prerequisite for reliable stability assessment. To develop a tool that can deliver a quick and accurate evaluation of rock mass quality, a deep learning approach is developed that uses stacked autoencoders (SAEs) with several autoencoders and a softmax net layer. Ten rock parameters of the rock mass rating (RMR) system are calibrated in this model. The model is trained using 75% of the total database as training sample data. The trained SAE model achieves nearly 100% prediction accuracy. For comparison, other models are also trained with the same dataset, using an artificial neural network (ANN) and a radial basis function (RBF) network. The results show that the SAEs classify all test samples correctly, while the rating accuracies of the ANN and RBF are 97.5% and 98.7%, respectively, as calculated from the confusion matrix. Moreover, the model is further employed to predict the slope risk level of an abandoned quarry. The proposed approach using SAEs, or deep learning in general, is more objective, more accurate, and requires less human intervention. The findings presented here should shed light for engineers and researchers interested in analyzing rock mass classification criteria or performing field investigations.
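A stripped-down version of the classifier described here — stacked encoder layers followed by a softmax output over rock mass classes, fed by the ten calibrated RMR parameters — is sketched below. For brevity, the greedy layer-wise autoencoder pretraining that gives SAEs their name is only indicated in a comment; the layer widths and five-class output are assumptions.

import torch
import torch.nn as nn

class SAEClassifier(nn.Module):
    """Stacked-autoencoder classifier: in the full method, each encoder layer is first
    pretrained as an autoencoder, then the whole stack plus the softmax layer is fine-tuned."""
    def __init__(self, n_features=10, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 16), nn.Sigmoid(),
            nn.Linear(16, 8), nn.Sigmoid())
        self.softmax_layer = nn.Linear(8, n_classes)   # CrossEntropyLoss applies the softmax

    def forward(self, x):
        return self.softmax_layer(self.encoder(x))

model = SAEClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(120, 10)                 # ten calibrated RMR parameters per sample (placeholder)
y = torch.randint(0, 5, (120,))         # rock mass quality classes (placeholder)
for _ in range(200):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()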
The widespread usage of Cyber-Physical Systems (CPSs) generates a vast volume of time series data, and precisely determining anomalies in the data is critical for practical production. The autoencoder is the mainstream method for time series anomaly detection, in which anomalies are judged by the reconstruction error. However, due to the strong generalization ability of neural networks, some abnormal samples close to normal samples may be judged as normal, so the abnormality goes undetected. In addition, datasets rarely provide sufficient anomaly labels. To solve these problems, this research proposes an unsupervised anomaly detection approach for multivariate time series based on adversarial memory autoencoders. First, an encoder encodes the input data into a low-dimensional space to acquire a feature vector. Then, a memory module is used to learn the feature vector's prototype patterns and update the feature vectors; the updating process allows partial forgetting of information to prevent model over-generalization. After that, two decoders reconstruct the input data. Finally, the Peak Over Threshold (POT) method is used to calculate the threshold that separates anomalous samples from normal samples. A two-stage adversarial training strategy is used during model training to enlarge the gap between the reconstruction errors of normal and abnormal samples. The proposed method achieves significant anomaly detection results on synthetic and real datasets from power systems, water treatment plants, and computer clusters. The F1 score reaches an average of 0.9196 on the five datasets, which is 0.0769 higher than the best baseline method.
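The memory module is the distinctive piece of this design: the encoder's feature vector is replaced by a softmax-weighted combination of learned prototype items, which limits how well unseen anomalies can be reconstructed. A minimal sketch is given below; the number of memory items, the latent size, the 38-channel input, and the omission of the dual decoders, adversarial training, and POT thresholding are all simplifications.

import torch
import torch.nn as nn

class MemoryModule(nn.Module):
    """Learned memory of prototype patterns: each latent vector is replaced by a
    softmax-weighted combination of the memory items most similar to it."""
    def __init__(self, n_items=50, latent_dim=16):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_items, latent_dim))

    def forward(self, z):
        attn = torch.softmax(z @ self.memory.t(), dim=1)   # similarity to each prototype
        return attn @ self.memory                          # reassembled latent vector

encoder = nn.Sequential(nn.Linear(38, 32), nn.ReLU(), nn.Linear(32, 16))
decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 38))
memory = MemoryModule()

window = torch.randn(8, 38)                               # placeholder multivariate time-series slice
recon = decoder(memory(encoder(window)))
anomaly_score = ((recon - window) ** 2).mean(dim=1)       # compared against a POT-derived threshold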
Recently, autoencoder (AE)-based methods have played a critical role in the hyperspectral anomaly detection domain. However, due to the strong generalization capacity of AEs, abnormal samples are usually reconstructed well along with the normal background samples. Thus, in order to separate anomalies from the background by calculating reconstruction errors, it is greatly beneficial to reduce the AE's capability to reconstruct abnormal samples while maintaining its background reconstruction performance. A memory-augmented autoencoder for hyperspectral anomaly detection (MAENet) is proposed to address this challenging problem. Specifically, the proposed MAENet mainly consists of an encoder, a memory module, and a decoder. First, the encoder transforms the original hyperspectral data into a low-dimensional latent representation. Then, the latent representation is used to retrieve the most relevant items in the memory matrix, and the retrieved items replace the latent representation from the encoder. Finally, the decoder reconstructs the input hyperspectral data using the retrieved memory items. With this strategy, the background can still be reconstructed well while the abnormal samples cannot. Experiments conducted on five real hyperspectral anomaly data sets demonstrate the superiority of the proposed method.
Invoice document digitization is crucial for efficient management in industry. Scanned invoice images are often noisy for various reasons, which affects the OCR (optical character recognition) detection accuracy. In this paper, letter data obtained from invoice images are denoised using a modified autoencoder-based deep learning method. A stacked denoising autoencoder (SDAE) is implemented with two hidden layers each in the encoder network and the decoder network. In order to capture the most salient features of the training samples, an undercomplete autoencoder is designed with non-linear encoder and decoder functions. This autoencoder is regularized for the denoising application using a combined loss function that considers both the mean square error and the binary cross-entropy. A dataset consisting of 59,119 letter images, containing both English alphabets (upper and lower case) and numbers (0 to 9), is prepared from many scanned invoice images and Windows TrueType (.ttf) files and used for training the neural network. Performance is analyzed in terms of the Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Universal Image Quality Index (UQI) and compared with other filtering techniques such as the non-local means filter, anisotropic diffusion filter, Gaussian filters, and mean filters. The denoising performance of the proposed SDAE is also compared with an existing SDAE using a single loss function in terms of SNR and PSNR values. The results show the superior performance of the proposed SDAE method.
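The regularized denoising objective described here — an undercomplete autoencoder trained with a combined mean-square-error and binary cross-entropy loss on (noisy, clean) letter-image pairs — can be sketched as follows. The 28x28 input size, hidden widths, equal loss weighting, and synthetic Gaussian noise are assumptions for illustration, not the paper's settings.

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Undercomplete autoencoder with two hidden layers each in the encoder and decoder."""
    def __init__(self, in_dim=28 * 28):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def combined_loss(recon, clean, alpha=0.5):
    # Combined regularizing loss: mean square error + binary cross-entropy
    mse = nn.functional.mse_loss(recon, clean)
    bce = nn.functional.binary_cross_entropy(recon, clean)
    return alpha * mse + (1 - alpha) * bce

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(32, 28 * 28)                            # placeholder letter images in [0, 1]
noisy = (clean + 0.2 * torch.randn_like(clean)).clamp(0, 1)
loss = combined_loss(model(noisy), clean)
opt.zero_grad(); loss.backward(); opt.step()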
Contemporary attackers, mainly motivated by financial gain, consistently devise sophisticated penetration techniques to access important information or data. The growing use of Internet of Things (IoT) technology in the contemporary convergence environment to connect to corporate networks and cloud-based applications only worsens this situation, as it facilitates multiple new attack vectors to emerge effortlessly. As such, existing intrusion detection systems suffer from performance degradation, mainly because of insufficient considerations and poorly modeled detection systems. To address this problem, we designed a blended threat detection approach, considering the possible impact and dimensionality of new attack surfaces due to the aforementioned convergence. We collectively refer to the convergence of different technology sectors as the internet of blended environment. The proposed approach encompasses an ensemble of heterogeneous probabilistic autoencoders that leverage the corresponding advantages of a convolutional variational autoencoder and a long short-term memory variational autoencoder. An extensive experimental analysis conducted on the TON_IoT dataset demonstrated 96.02% detection accuracy. Furthermore, the performance of the proposed approach was compared with various single-model (autoencoder)-based network intrusion detection approaches: the autoencoder, variational autoencoder, convolutional variational autoencoder, and long short-term memory variational autoencoder. The proposed model outperformed all compared models, demonstrating F1-score improvements of 4.99%, 2.25%, 1.92%, and 3.69%, respectively.
基金co-supported by the Natural Science Basic Research Program of Shaanxi,China(No.2023-JC-QN-0043)the ND Basic Research Funds,China(No.G2022WD).
文摘The aerial deployment method enables Unmanned Aerial Vehicles(UAVs)to be directly positioned at the required altitude for their mission.This method typically employs folding technology to improve loading efficiency,with applications such as the gravity-only aerial deployment of high-aspect-ratio solar-powered UAVs,and aerial takeoff of fixed-wing drones in Mars research.However,the significant morphological changes during deployment are accompanied by strong nonlinear dynamic aerodynamic forces,which result in multiple degrees of freedom and an unstable character.This hinders the description and analysis of unknown dynamic behaviors,further leading to difficulties in the design of deployment strategies and flight control.To address this issue,this paper proposes an analysis method for dynamic behaviors during aerial deployment based on the Variational Autoencoder(VAE).Focusing on the gravity-only deployment problem of highaspect-ratio foldable-wing UAVs,the method encodes the multi-degree-of-freedom unstable motion signals into a low-dimensional feature space through a data-driven approach.By clustering in the feature space,this paper identifies and studies several dynamic behaviors during aerial deployment.The research presented in this paper offers a new method and perspective for feature extraction and analysis of complex and difficult-to-describe extreme flight dynamics,guiding the research on aerial deployment drones design and control strategies.
基金financially supported by the National Natural Science Foundation of China under Grant 62372369,52107229,62272383the Key Research and Development Program of Shaanxi Province(2024GX-YBXM-442)Natural Science Basic Research Program of Shaanxi Province(2024JC-YBMS-477)。
文摘To predict the lithium-ion(Li-ion)battery degradation trajectory in the early phase,arranging the maintenance of battery energy storage systems is of great importance.However,under different operation conditions,Li-ion batteries present distinct degradation patterns,and it is challenging to capture negligible capacity fade in early cycles.Despite the data-driven method showing promising performance,insufficient data is still a big issue since the ageing experiments on the batteries are too slow and expensive.In this study,we proposed twin autoencoders integrated into a two-stage method to predict the early cycles'degradation trajectories.The two-stage method can properly predict the degradation from course to fine.The twin autoencoders serve as a feature extractor and a synthetic data generator,respectively.Ultimately,a learning procedure based on the long-short term memory(LSTM)network is designed to hybridize the learning process between the real and synthetic data.The performance of the proposed method is verified on three datasets,and the experimental results show that the proposed method can achieve accurate predictions compared to its competitors.
基金The National Natural Science Foundation of China(No.52361165658,52378318,52078459).
文摘To enhance the accuracy and efficiency of bridge damage identification,a novel data-driven damage identification method was proposed.First,convolutional autoencoder(CAE)was used to extract key features from the acceleration signal of the bridge structure through data reconstruction.The extreme gradient boosting tree(XGBoost)was then used to perform analysis on the feature data to achieve damage detection with high accuracy and high performance.The proposed method was applied in a numerical simulation study on a three-span continuous girder and further validated experimentally on a scaled model of a cable-stayed bridge.The numerical simulation results show that the identification errors remain within 2.9%for six single-damage cases and within 3.1%for four double-damage cases.The experimental validation results demonstrate that when the tension in a single cable of the cable-stayed bridge decreases by 20%,the method accurately identifies damage at different cable locations using only sensors installed on the main girder,achieving identification accuracies above 95.8%in all cases.The proposed method shows high identification accuracy and generalization ability across various damage scenarios.
基金supported in part by National Natural Science Foundation of China(No.62176041)in part by Excellent Science and Technique Talent Foundation of Dalian(No.2022RY21).
文摘Significant advancements have beenwitnessed in visual tracking applications leveragingViT in recent years,mainly due to the formidablemodeling capabilities of Vision Transformer(ViT).However,the strong performance of such trackers heavily relies on ViT models pretrained for long periods,limitingmore flexible model designs for tracking tasks.To address this issue,we propose an efficient unsupervised ViT pretraining method for the tracking task based on masked autoencoders,called TrackMAE.During pretraining,we employ two shared-parameter ViTs,serving as the appearance encoder and motion encoder,respectively.The appearance encoder encodes randomly masked image data,while the motion encoder encodes randomly masked pairs of video frames.Subsequently,an appearance decoder and a motion decoder separately reconstruct the original image data and video frame data at the pixel level.In this way,ViT learns to understand both the appearance of images and the motion between video frames simultaneously.Experimental results demonstrate that ViT-Base and ViT-Large models,pretrained with TrackMAE and combined with a simple tracking head,achieve state-of-the-art(SOTA)performance without additional design.Moreover,compared to the currently popular MAE pretraining methods,TrackMAE consumes only 1/5 of the training time,which will facilitate the customization of diverse models for tracking.For instance,we additionally customize a lightweight ViT-XS,which achieves SOTA efficient tracking performance.
基金a result of project WAY4SafeRail—Wayside monitoring system FOR SAFE RAIL transportation, with reference NORTE-01-0247-FEDER-069595co-funded by the European Regional Development Fund (ERDF), through the North Portugal Regional Operational Programme (NORTE2020), under the PORTUGAL 2020 Partnership Agreement+3 种基金financially supported by Base Funding-UIDB/04708/2020Programmatic Funding-UIDP/04708/2020 of the CONSTRUCT—Instituto de Estruturas e Constru??esfunded by national funds through the FCT/ MCTES (PIDDAC)Grant No. 2021.04272. CEECIND from the Stimulus of Scientific Employment, Individual Support (CEECIND) - 4th Edition provided by “FCT – Funda??o para a Ciência, DOI : https:// doi. org/ 10. 54499/ 2021. 04272. CEECI ND/ CP1679/ CT0003”。
文摘Wayside monitoring is a promising cost-effective alternative to predict damage in the rolling stock. The main goal of this work is to present an unsupervised methodology to identify out-of-roundness(OOR) damage wheels, such as wheel flats and polygonal wheels. This automatic damage identification algorithm is based on the vertical acceleration evaluated on the rails using a virtual wayside monitoring system and involves the application of a two-step procedure. The first step aims to define a confidence boundary by using(healthy) measurements evaluated on the rail constituting a baseline. The second step of the procedure involves classifying damage of predefined scenarios with different levels of severities. The proposed procedure is based on a machine learning methodology and includes the following stages:(1) data collection,(2) damage-sensitive feature extraction from the acquired responses using a neural network model, i.e., the sparse autoencoder(SAE),(3) data fusion based on the Mahalanobis distance, and(4) unsupervised feature classification by implementing outlier and cluster analysis. This procedure considers baseline responses at different speeds and rail irregularities to train the SAE model. Then, the trained SAE is capable to reconstruct test responses(not trained) allowing to compute the accumulative difference between original and reconstructed signals. The results prove the efficiency of the proposed approach in identifying the two most common types of OOR in railway wheels.
基金National Natural Science Foundation of China,Grant/Award Numbers:62173236,61876110,61806130,61976142,82304204.
文摘Network embedding(NE)tries to learn the potential properties of complex networks represented in a low-dimensional feature space.However,the existing deep learningbased NE methods are time-consuming as they need to train a dense architecture for deep neural networks with extensive unknown weight parameters.A sparse deep autoencoder(called SPDNE)for dynamic NE is proposed,aiming to learn the network structures while preserving the node evolution with a low computational complexity.SPDNE tries to use an optimal sparse architecture to replace the fully connected architecture in the deep autoencoder while maintaining the performance of these models in the dynamic NE.Then,an adaptive simulated algorithm to find the optimal sparse architecture for the deep autoencoder is proposed.The performance of SPDNE over three dynamical NE models(i.e.sparse architecture-based deep autoencoder method,DynGEM,and ElvDNE)is evaluated on three well-known benchmark networks and five real-world networks.The experimental results demonstrate that SPDNE can reduce about 70%of weight parameters of the architecture for the deep autoencoder during the training process while preserving the performance of these dynamical NE models.The results also show that SPDNE achieves the highest accuracy on 72 out of 96 edge prediction and network reconstruction tasks compared with the state-of-the-art dynamical NE algorithms.
基金supported by Hong Kong Government general research fund (GRF) under project number PolyU152757/16ENational Natural Science Foundation China under project numbers 61435006 and 61401020
文摘We study the effects of quantization and additive white Gaussian noise(AWGN) in transmitting latent representations of images over a noisy communication channel. The latent representations are obtained using autoencoders(AEs). We analyze image reconstruction and classification performance for different channel noise powers, latent vector sizes, and number of quantization bits used for the latent variables as well as AEs’ parameters. The results show that the digital transmission of latent representations using conventional AEs alone is extremely vulnerable to channel noise and quantization effects. We then propose a combination of basic AE and a denoising autoencoder(DAE) to denoise the corrupted latent vectors at the receiver. This approach demonstrates robustness against channel noise and quantization effects and enables a significant improvement in image reconstruction and classification performance particularly in adverse scenarios with high noise powers and significant quantization effects.
基金This work is supported by the National Natural Science Foundation of China(Grant No.61672282)the Basic Research Program of Jiangsu Province(Grant No.BK20161491).
文摘Wireless sensor networks are increasingly used in sensitive event monitoring.However,various abnormal data generated by sensors greatly decrease the accuracy of the event detection.Although many methods have been proposed to deal with the abnormal data,they generally detect and/or repair all abnormal data without further differentiate.Actually,besides the abnormal data caused by events,it is well known that sensor nodes prone to generate abnormal data due to factors such as sensor hardware drawbacks and random effects of external sources.Dealing with all abnormal data without differentiate will result in false detection or missed detection of the events.In this paper,we propose a data cleaning approach based on Stacked Denoising Autoencoders(SDAE)and multi-sensor collaborations.We detect all abnormal data by SDAE,then differentiate the abnormal data by multi-sensor collaborations.The abnormal data caused by events are unchanged,while the abnormal data caused by other factors are repaired.Real data based simulations show the efficiency of the proposed approach.
基金Supported by National Natural Science Foundation of China(41804126,41604107).
文摘Supervised machine learning algorithms have been widely used in seismic exploration processing,but the lack of labeled examples complicates its application.Therefore,we propose a seismic labeled data expansion method based on deep variational Autoencoders(VAE),which are made of neural networks and contains two partsEncoder and Decoder.Lack of training samples leads to overfitting of the network.We training the VAE with whole seismic data,which is a data-driven process and greatly alleviates the risk of overfitting.The Encoder captures the ability to map the seismic waveform Y to latent deep features z,and the Decoder captures the ability to reconstruct high-dimensional waveform Yb from latent deep features z.Later,we put the labeled seismic data into Encoders and get the latent deep features.We can easily use gaussian mixture model to fit the deep feature distribution of each class labeled data.We resample a mass of expansion deep features z* according to the Gaussian mixture model,and put the expansion deep features into the decoder to generate expansion seismic data.The experiments in synthetic and real data show that our method alleviates the problem of lacking labeled seismic data for supervised seismic facies analysis.
文摘A pathological complete response to neoadjuvant chemoradiotherapy offers patients with rectal cancer that has advanced locally the highest chance of survival.However,there is not yet a valid prediction model available.An efficient feature extraction technique is also required to increase a prediction model’s precision.CDAS(cancer data access system)program is a great place to look for cancer along with images or biospecimens.In this study,we look at data from the CDAS system,specifically bowel cancer(colorectal cancer)datasets.This study suggested a survival prediction method for rectal cancer.In addition,this determines which deep learning algorithm works best by comparing their performance in terms of prediction accuracy.The initial job that leads to correct findings is corpus cleansing.Moving forward,the data preprocessing activity will be performed,which will comprise“exploratory data analysis and pruning and normalization or experimental study of data,which is required to obtain data features to design the model for cancer detection at an early stage.”Aside from that,the data corpus is separated into two sub-corpora:training data and test data,which will be utilized to assess the correctness of the constructed model.This study will compare our autoencoder accuracy to that of other deep learning algorithms,such as artificial neural network,convolutional neural network,and restricted Boltzmann machine,before implementing the suggested methodology and displaying the model’s accuracy graphically after the suggested new methodology or algorithm for patients with rectal cancer.Various criteria,including true positive rate,receiver operating characteristic(ROC)curve,and accuracy scores,are used in the experiments to determine the model’s high accuracy.In the end,we determine the accuracy score for each model.The outcomes of the simulation demonstrated that rectal cancer patients may be estimated using prediction models.It is shown that variational deep encoders have excellent accuracy of 94%in this cancer prediction and 95%for ROC curve regions.The findings demonstrate that automated prediction algorithms are capable of properly estimating rectal cancer patients’chances of survival.The best results,with 95%accuracy,were generated by deep autoencoders.
基金This research received funding from the Flemish Government(AI Research Program)This research has received support of Flanders Make,the strategic research center for the manufacturing industry.
文摘Anomaly detection(AD)is an important task in a broad range of domains.A popular choice for AD are Deep Support Vector Data Description models.When learning such models,normal data is mapped close to and anomalous data is mapped far from a center,in some latent space,enabling the construction of a sphere to separate both types of data.Empirically,it was observed:(i)that the center and radius of such sphere largely depend on the training data and model initialization which leads to difficulties when selecting a threshold,and(ii)that the center and radius of this sphere strongly impact the model AD performance on unseen data.In this work,a more robust AD solution is proposed that(i)defines a sphere with a fixed radius and margin in some latent space and(ii)enforces the encoder,which maps the input to a latent space,to encode the normal data in a small sphere and the anomalous data outside a larger sphere,with the same center.Experimental results indicate that the proposed algorithm attains higher performance compared to alternatives,and that the difference in size of the two spheres has a minor impact on the performance.
基金This research work is supported by the Deputyship of Research&Innovation,Ministry of Education in Saudi Arabia(Grant Number 758).
文摘Visual motion segmentation(VMS)is an important and key part of many intelligent crowd systems.It can be used to figure out the flow behavior through a crowd and to spot unusual life-threatening incidents like crowd stampedes and crashes,which pose a serious risk to public safety and have resulted in numerous fatalities over the past few decades.Trajectory clustering has become one of the most popular methods in VMS.However,complex data,such as a large number of samples and parameters,makes it difficult for trajectory clustering to work well with accurate motion segmentation results.This study introduces a spatial-angular stacked sparse autoencoder model(SA-SSAE)with l2-regularization and softmax,a powerful deep learning method for visual motion segmentation to cluster similar motion patterns that belong to the same cluster.The proposed model can extract meaningful high-level features using only spatial-angular features obtained from refined tracklets(a.k.a‘trajectories’).We adopt l2-regularization and sparsity regularization,which can learn sparse representations of features,to guarantee the sparsity of the autoencoders.We employ the softmax layer to map the data points into accurate cluster representations.One of the best advantages of the SA-SSAE framework is it can manage VMS even when individuals move around randomly.This framework helps cluster the motion patterns effectively with higher accuracy.We put forward a new dataset with itsmanual ground truth,including 21 crowd videos.Experiments conducted on two crowd benchmarks demonstrate that the proposed model can more accurately group trajectories than the traditional clustering approaches used in previous studies.The proposed SA-SSAE framework achieved a 0.11 improvement in accuracy and a 0.13 improvement in the F-measure compared with the best current method using the CUHK dataset.
基金supported by the Italian University and Research Consortium CINECA
文摘Fault diagnosis of electric motors is a fundamental task for production line testing, and it is usually performed by experienced human operators. In the recent years, several methods have been proposed in the literature for detecting faults automatically. Deep neural networks have been successfully employed for this task, but, up to the authors' knowledge, they have never been used in an unsupervised scenario. This paper proposes an unsupervised method for diagnosing faults of electric motors by using a novelty detection approach based on deep autoencoders. In the proposed method, vibration signals are acquired by using accelerometers and processed to extract LogMel coefficients as features. Autoencoders are trained by using normal data only, i.e., data that do not contain faults. Three different autoencoders architectures have been evaluated: the multilayer perceptron(MLP) autoencoder, the convolutional neural network autoencoder, and the recurrent autoencoder composed of long short-term memory(LSTM) units. The experiments have been conducted by using a dataset created by the authors, and the proposed approaches have been compared to the one-class support vector machine(OC-SVM) algorithm. The performance has been evaluated in terms area under curve(AUC) of the receiver operating characteristic curve, and the results showed that all the autoencoder-based approaches outperform the OCSVM algorithm. Moreover, the MLP autoencoder is the most performing architecture, achieving an AUC equal to 99.11 %.
基金co-supported by the Key Program of National Natural Science Foundation of China (No. U1533202)the Civil Aviation Administration of China (No. MHRD20150104)Shandong Independent Innovation and Achievements Transformation Fund (No. 2014CGZH1101)
文摘Many existing aircraft engine fault detection methods are highly dependent on performance deviation data that are provided by the original equipment manufacturer. To improve the independent engine fault detection ability, Aircraft Communications Addressing and Reporting System(ACARS) data can be used. However, owing to the characteristics of high dimension, complex correlations between parameters, and large noise content, it is difficult for existing methods to detect faults effectively by using ACARS data. To solve this problem, a novel engine fault detection method based on original ACARS data is proposed. First, inspired by computer vision methods, all variables were divided into separated groups according to their correlations. Then, an improved convolutional denoising autoencoder was used to extract the features of each group. Finally, all of the extracted features were fused to form feature vectors. Thereby, fault samples could be identified based on these feature vectors. Experiments were conducted to validate the effectiveness and efficiency of our method and other competing methods by considering real ACARS data as the data source. The results reveal the good performance of our method with regard to comprehensive fault detection and robustness. Additionally, the computational and time costs of our method are shown to be relatively low.
文摘In this study,the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based,deep-learning(LSTM)and ensemble learning(Light-GBM)models.These models were trained with four different feature sets and their performances were evaluated in terms of accuracy and F-measure metrics.While the first experiments directly used the own stock features as the model inputs,the second experiments utilized reduced stock features through Variational AutoEncoders(VAE).In the last experiments,in order to grasp the effects of the other banking stocks on individual stock performance,the features belonging to other stocks were also given as inputs to our models.While combining other stock features was done for both own(named as allstock_own)and VAE-reduced(named as allstock_VAE)stock features,the expanded dimensions of the feature sets were reduced by Recursive Feature Elimination.As the highest success rate increased up to 0.685 with allstock_own and LSTM with attention model,the combination of allstock_VAE and LSTM with the attention model obtained an accuracy rate of 0.675.Although the classification results achieved with both feature types was close,allstock_VAE achieved these results using nearly 16.67%less features compared to allstock_own.When all experimental results were examined,it was found out that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features.It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained by own stock features.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51979253 and 51879245) and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (Grant No. CUGCJ1821).
Abstract: Objective and accurate evaluation of rock mass quality classification is the prerequisite for reliable stability assessment. To develop a tool that can deliver quick and accurate evaluations of rock mass quality, a deep learning approach is developed that uses stacked autoencoders (SAEs) composed of several autoencoders and a softmax net layer. Ten rock parameters of the rock mass rating (RMR) system are calibrated in this model. The model is trained using 75% of the total database as training samples. The trained SAE model achieves nearly 100% prediction accuracy. For comparison, other models are trained on the same dataset using an artificial neural network (ANN) and a radial basis function (RBF) network. The results show that the SAEs classify all test samples correctly, while the rating accuracies of the ANN and RBF models, calculated from the confusion matrix, are 97.5% and 98.7%, respectively. Moreover, the model is further employed to predict the slope risk level of an abandoned quarry. The proposed approach using SAEs, or deep learning in general, is more objective, more accurate, and requires less human intervention. The findings presented here should be of use to engineers and researchers interested in analyzing rock mass classification criteria or performing field investigations.
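A minimal sketch of the SAE classifier structure is given below: stacked encoder layers followed by a softmax output layer over the rock mass quality classes. The hidden-layer sizes, the assumption of five output classes, and the supervised fine-tuning loop are illustrative; the greedy layer-wise pretraining of each autoencoder, which the SAE approach normally includes, is omitted here for brevity.

```python
import torch
import torch.nn as nn

# Ten calibrated RMR parameters in, an assumed five quality classes out.
N_PARAMS, N_CLASSES, HIDDEN = 10, 5, (16, 8)

class SAEClassifier(nn.Module):
    """Stacked autoencoder: encoder layers (each ideally pretrained as an
    individual autoencoder) topped with a softmax output layer."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_PARAMS, HIDDEN[0]), nn.Sigmoid(),
            nn.Linear(HIDDEN[0], HIDDEN[1]), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(HIDDEN[1], N_CLASSES)  # softmax applied via CrossEntropyLoss

    def forward(self, x):
        return self.classifier(self.encoder(x))

model = SAEClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
X = torch.rand(100, N_PARAMS)              # placeholder calibrated RMR ratings
y = torch.randint(0, N_CLASSES, (100,))    # placeholder quality-class labels
for _ in range(50):                        # supervised fine-tuning stage
    optimizer.zero_grad()
    loss_fn(model(X), y).backward()
    optimizer.step()
```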
Funding: Supported by the National Natural Science Foundation of China (62203431).
Abstract: The widespread use of Cyber-Physical Systems (CPSs) generates a vast volume of time series data, and precisely detecting anomalies in these data is critical for practical production. The autoencoder is the mainstream method for time series anomaly detection, with anomalies judged by reconstruction error. However, because of the strong generalization ability of neural networks, some abnormal samples close to normal samples may be reconstructed well and judged as normal, so the abnormality goes undetected. In addition, datasets rarely provide sufficient anomaly labels. This research proposes an unsupervised anomaly detection approach for multivariate time series based on adversarial memory autoencoders to solve these problems. First, an encoder maps the input data into a low-dimensional space to obtain a feature vector. Then, a memory module learns the feature vector's prototype patterns and updates the feature vector; the updating process allows partial forgetting of information to prevent model over-generalization. After that, two decoders reconstruct the input data. Finally, the Peak-Over-Threshold (POT) method is used to calculate the threshold that separates anomalous samples from normal ones. A two-stage adversarial training strategy is used during model training to enlarge the gap between the reconstruction errors of normal and abnormal samples. The proposed method achieves significant anomaly detection results on synthetic and real datasets from power systems, water treatment plants, and computer clusters. The F1 score reaches an average of 0.9196 across the five datasets, 0.0769 higher than the best baseline method.
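The final thresholding step can be sketched as follows: a generalized Pareto distribution is fitted to the reconstruction errors exceeding a high initial quantile, and the anomaly threshold is derived for a target risk level. The initial quantile and risk level below are illustrative assumptions, and the streaming refinements of the full POT/SPOT procedure are omitted.

```python
import numpy as np
from scipy.stats import genpareto

def pot_threshold(errors, init_quantile=0.98, q=1e-3):
    """Peak-Over-Threshold: fit a generalized Pareto distribution to the
    errors exceeding an initial high quantile, then return the final
    anomaly threshold for risk level q."""
    t = np.quantile(errors, init_quantile)
    excesses = errors[errors > t] - t
    gamma, _, sigma = genpareto.fit(excesses, floc=0.0)   # shape, loc (fixed at 0), scale
    n, n_t = len(errors), len(excesses)
    if abs(gamma) < 1e-8:                                 # exponential-tail limit
        return t - sigma * np.log(q * n / n_t)
    return t + (sigma / gamma) * ((q * n / n_t) ** (-gamma) - 1.0)

errors = np.abs(np.random.randn(10000))   # placeholder reconstruction errors
threshold = pot_threshold(errors)
anomalies = errors > threshold            # samples flagged as anomalous
```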
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62076199, in part by the Open Research Fund of the Beijing Key Laboratory of Big Data Technology for Food Safety under Grant BTBD-2020KF08, Beijing Technology and Business University, and in part by the Key R&D Project of Shaanxi Province under Grants 2021GY-027 and 2022ZDLGY01-03.
Abstract: Recently, autoencoder (AE) based methods have played a critical role in hyperspectral anomaly detection. However, owing to the strong generalization capacity of AEs, abnormal samples are usually reconstructed well along with the normal background samples. Thus, to separate anomalies from the background by calculating reconstruction errors, it is greatly beneficial to reduce the AE's ability to reconstruct abnormal samples while maintaining its background reconstruction performance. A memory-augmented autoencoder for hyperspectral anomaly detection (MAENet) is proposed to address this challenging problem. Specifically, the proposed MAENet consists of an encoder, a memory module, and a decoder. First, the encoder transforms the original hyperspectral data into a low-dimensional latent representation. Then, the latent representation is used to retrieve the most relevant items in the memory matrix, and the retrieved items replace the latent representation from the encoder. Finally, the decoder reconstructs the input hyperspectral data using the retrieved memory items. With this strategy, the background can still be reconstructed well while the abnormal samples cannot. Experiments conducted on five real hyperspectral anomaly data sets demonstrate the superiority of the proposed method.
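The retrieval step at the heart of such memory-augmented autoencoders can be sketched as a soft attention over a learned memory matrix: each latent code is replaced by a similarity-weighted combination of memory items before decoding. The number of memory items, the latent dimension, and the use of plain cosine-similarity addressing are assumptions for illustration; the actual MAENet design may differ in its addressing and sparsification details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    """Replaces the encoder's latent code with a combination of learned
    prototype items, which limits how well anomalies can be reconstructed."""
    def __init__(self, n_items=50, latent_dim=16):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_items, latent_dim))

    def forward(self, z):
        # Cosine-similarity addressing followed by softmax weighting.
        sim = F.cosine_similarity(z.unsqueeze(1), self.memory.unsqueeze(0), dim=-1)
        w = F.softmax(sim, dim=1)      # (batch, n_items)
        return w @ self.memory         # retrieved latent, shape (batch, latent_dim)

mem = MemoryModule()
z = torch.randn(8, 16)                 # latent codes produced by the encoder
z_hat = mem(z)                         # fed to the decoder in place of z
```

Because the decoder only ever sees combinations of normal prototype items, background pixels reconstruct well while anomalous spectra incur large reconstruction errors.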
Abstract: Invoice document digitization is crucial for efficient management in industry. Scanned invoice images are often noisy for various reasons, which degrades OCR (optical character recognition) accuracy. In this paper, letter data obtained from invoice images are denoised using a modified autoencoder-based deep learning method. A stacked denoising autoencoder (SDAE) is implemented with two hidden layers each in the encoder and decoder networks. To capture the most salient features of the training samples, an undercomplete autoencoder is designed with non-linear encoder and decoder functions. The autoencoder is regularized for the denoising application using a combined loss function that considers both mean square error and binary cross entropy. A dataset of 59,119 letter images, containing both English letters (upper and lower case) and digits (0 to 9), is prepared from many scanned invoice images and Windows TrueType (.ttf) files and used for training the neural network. Performance is analyzed in terms of Signal to Noise Ratio (SNR), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Universal Image Quality Index (UQI), and compared with other filtering techniques such as the non-local means filter, anisotropic diffusion filter, Gaussian filter, and mean filter. The denoising performance of the proposed SDAE is also compared with an existing SDAE using a single loss function, in terms of SNR and PSNR. The results show the superior performance of the proposed SDAE method.
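A combined MSE-plus-BCE reconstruction loss of the kind described above could be written as in the sketch below. The weighting factor `alpha`, the image size, and the noise model are illustrative assumptions, not the paper's exact settings; the only requirements are that pixel values lie in [0, 1] and that the network output is squashed accordingly.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
bce = nn.BCELoss()

def combined_denoising_loss(reconstructed, clean, alpha=0.5):
    """Weighted sum of mean squared error and binary cross entropy between the
    reconstructed letter image and its clean target (pixel values in [0, 1])."""
    return alpha * mse(reconstructed, clean) + (1.0 - alpha) * bce(reconstructed, clean)

# Usage: the SDAE receives the noisy image and is penalized against the clean one.
clean = torch.rand(32, 1, 28, 28)                          # placeholder letter images
noisy = (clean + 0.2 * torch.randn_like(clean)).clamp(0, 1)
reconstructed = torch.sigmoid(torch.randn_like(clean))     # stands in for the SDAE output
loss = combined_denoising_loss(reconstructed, clean)
```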
Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2021R1A2C2011391) and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-01806, Development of security by design and security management technology in smart factory).
Abstract: Contemporary attackers, mainly motivated by financial gain, consistently devise sophisticated penetration techniques to access important information or data. The growing use of Internet of Things (IoT) technology in the contemporary convergence environment to connect to corporate networks and cloud-based applications only worsens this situation, as it allows multiple new attack vectors to emerge effortlessly. As a result, existing intrusion detection systems suffer from performance degradation, mainly because of insufficient considerations and poorly modeled detection systems. To address this problem, we designed a blended threat detection approach that considers the possible impact and dimensionality of new attack surfaces arising from the aforementioned convergence, which we collectively refer to as the internet of blended environment. The proposed approach comprises an ensemble of heterogeneous probabilistic autoencoders that leverages the respective advantages of a convolutional variational autoencoder and a long short-term memory variational autoencoder. An extensive experimental analysis conducted on the TON_IoT dataset demonstrated 96.02% detection accuracy. Furthermore, the performance of the proposed approach was compared with various single-model (autoencoder-based) network intrusion detection approaches: the autoencoder, variational autoencoder, convolutional variational autoencoder, and long short-term memory variational autoencoder. The proposed model outperformed all compared models, demonstrating F1-score improvements of 4.99%, 2.25%, 1.92%, and 3.69%, respectively.
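The abstract does not specify how the two autoencoders' outputs are combined, so the sketch below shows one plausible fusion rule: min-max normalising each model's per-sample anomaly score and taking a weighted average before thresholding. The equal weights and quantile threshold are assumptions for illustration only.

```python
import numpy as np

def ensemble_anomaly_score(scores_cvae, scores_lstm_vae, weights=(0.5, 0.5)):
    """Fuse per-sample anomaly scores from the two heterogeneous probabilistic
    autoencoders after min-max normalisation, so neither model dominates."""
    def normalise(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return weights[0] * normalise(scores_cvae) + weights[1] * normalise(scores_lstm_vae)

# Placeholder scores; in practice these would be reconstruction-based scores
# from the convolutional VAE and the LSTM VAE on the same test samples.
scores = ensemble_anomaly_score(np.random.rand(100), np.random.rand(100))
flagged = scores > np.quantile(scores, 0.95)   # illustrative decision threshold
```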