Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
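The following is a minimal, hypothetical sketch (not the authors' code) of the delay-splitting idea for a simple constant-delay DDE u'(t) = -u(t - τ) on [0, 2τ] with history u(t) = 1 for t ≤ 0: the domain is split at the breaking point t = τ, a two-head network represents the solution on each segment, and the loss combines per-segment residuals with initial and breaking-point continuity conditions. Network sizes, optimizer settings, and collocation counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

tau = 1.0
torch.manual_seed(0)

class SegmentedNet(nn.Module):
    """Two output heads, one per delay segment [0, tau] and [tau, 2*tau]."""
    def __init__(self, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                  nn.Linear(hidden, hidden), nn.Tanh())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(2)])

    def forward(self, t, seg):
        return self.heads[seg](self.body(t))

def d_dt(u, t):
    # derivative of the network output with respect to its input time points
    return torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                               create_graph=True)[0]

net = SegmentedNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t0 = torch.linspace(0, tau, 64).reshape(-1, 1).requires_grad_(True)        # task 1
t1 = torch.linspace(tau, 2 * tau, 64).reshape(-1, 1).requires_grad_(True)  # task 2
t_break = torch.tensor([[tau]])

for step in range(5000):
    opt.zero_grad()
    u0 = net(t0, 0)
    u1 = net(t1, 1)
    # residual u'(t) = -u(t - tau); the delayed term is the history (=1) on task 1
    r0 = d_dt(u0, t0) + torch.ones_like(u0)
    # and the previous segment's head on task 2
    r1 = d_dt(u1, t1) + net(t1 - tau, 0)
    # initial condition u(0) = 1 and continuity at the breaking point t = tau
    ic = net(torch.zeros(1, 1), 0) - 1.0
    cont = net(t_break, 0) - net(t_break, 1)
    loss = (r0 ** 2).mean() + (r1 ** 2).mean() + ic.pow(2).mean() + cont.pow(2).mean()
    loss.backward()
    opt.step()
```

In this toy setting the breaking-point term plays the role the abstract assigns to MTL: each segment is a task, and the shared loss ties the tasks together at t = τ.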
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV) databases, which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represents an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results showed that this DRes-CNN-based imputation method outperforms current state-of-the-art models, establishing DRes-CNN as a reliable solution for addressing missing data.
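As a purely illustrative companion (not the paper's DRes-CNN), the sketch below shows one common way to phrase residual convolutional imputation: missing entries are zero-filled, a binary observation mask is supplied as a second channel, and a small residual 1D-CNN is trained to reconstruct the full record, with the loss computed only on observed entries. Layer widths, the missingness rate, and the training loop are assumptions.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv1d(ch, ch, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        # residual connection around two convolutions
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class ResImputer(nn.Module):
    """Input: (batch, 2, n_features) = [zero-filled values, observation mask]."""
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        self.inp = nn.Conv1d(2, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock1d(ch) for _ in range(n_blocks)])
        self.out = nn.Conv1d(ch, 1, 3, padding=1)

    def forward(self, x):
        return self.out(self.blocks(self.inp(x))).squeeze(1)

def masked_mse(pred, target, mask):
    # supervise only where values were actually observed
    return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)

# toy usage with synthetic records
x_full = torch.randn(128, 64)                      # 128 records, 64 features
mask = (torch.rand_like(x_full) > 0.2).float()     # 1 = observed, 0 = missing
x_in = torch.stack([x_full * mask, mask], dim=1)   # zero-fill + mask channel
model = ResImputer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = masked_mse(model(x_in), x_full, mask)
    loss.backward()
    opt.step()
```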
This paper addresses the performance degradation issue in a fast radio burst search pipeline based on deep learning. This issue is caused by the class imbalance of the radio frequency interference samples in the training dataset, and a solution is applied to improve the distribution of the training data by augmenting minority-class samples using a deep convolutional generative adversarial network. Experimental results demonstrate that retraining the deep learning model with the newly generated dataset leads to a new fast radio burst classifier, which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference, thereby enhancing the performance of the search pipeline.
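A minimal DCGAN skeleton of the kind typically used for minority-class augmentation is sketched below; it is illustrative only, with a generator that upsamples a noise vector into a single-channel 64×64 image and a convolutional discriminator, both with assumed layer sizes rather than the pipeline's actual configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, nz=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),   # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),  # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),    # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),     # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())                          # 64x64

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 8, 1, 0))   # 8x8 feature map -> single logit

    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
real = torch.randn(16, 1, 64, 64)            # stand-in for real minority-class RFI images

for _ in range(100):
    z = torch.randn(16, 100, 1, 1)
    fake = G(z)
    # discriminator step: real vs. generated samples
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(16)) + bce(D(fake.detach()), torch.zeros(16))
    loss_d.backward(); opt_d.step()
    # generator step: fool the discriminator
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(16))
    loss_g.backward(); opt_g.step()
```

Once trained, samples drawn from the generator would be added to the minority class before the classifier is retrained.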
With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models: a Feedforward Neural Network (FNN) and a Deep Clustering Network (DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal (TEJ) database. These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, aligning with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (ranging from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate in effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
The categorization of brain tumors is a significant issue for healthcare applications. Accurate and timely identification of brain tumors is important for employing an effective treatment of this disease. Brain tumors exhibit high variability in size, shape, and number, and hence the classification process is a particularly difficult research problem. This paper suggests a deep learning model using the magnetic resonance imaging technique that overcomes the limitations associated with the existing classification methods. The effectiveness of the suggested method depends on the coyote optimization algorithm, also known as the LOBO algorithm, which optimizes the weights of the deep convolutional neural network classifier. The accuracy, sensitivity, and specificity indices, obtained as 92.40%, 94.15%, and 91.92%, respectively, are used to validate the effectiveness of the suggested method. The results suggest that the proposed strategy is superior for effectively classifying brain tumors.
The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. The comparative analysis of DMST-GNODE and baseline models indicates that the DMST-GNODE model demonstrates superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values, alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend across 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These numerical highlights indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-Score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, representing improvements over the baseline results on the original data. Moreover, the weighted average F1-Score across all classes and techniques is 0.9886, indicating an enhancement. Conversely, methods like Distort lead to decreased accuracy and F1-Score, with an F1-Score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings of this study can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.
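For illustration (independent of the paper's exact pipeline), an Equalize-style augmentation can be expressed as a histogram-equalization transform with PIL and torchvision; the probability, target size, and file path below are placeholder assumptions.

```python
from PIL import Image, ImageOps
from torchvision import transforms

# Equalize augmentation: spread each image's intensity histogram
# before the usual resize/normalize steps of a CNN input pipeline.
augment = transforms.Compose([
    transforms.Lambda(lambda img: ImageOps.equalize(img)),  # per-band histogram equalization
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("thin_section_example.jpg").convert("RGB")  # hypothetical RTS image path
x = augment(img)   # tensor of shape (3, 224, 224), ready for a CNN
```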
During its growth stage, a plant is exposed to various diseases. Early detection of crop diseases is a major challenge in the horticulture industry. Crop infections can harm total crop yield and reduce farmers' income if not identified early. Today's approved method involves a professional plant pathologist diagnosing the disease by visual inspection of the afflicted plant leaves. This is an excellent use case for Community Assessment and Treatment Services (CATS), because the manual disease diagnosis process is lengthy and the accuracy of identification is directly proportional to the skills of pathologists. An alternative to conventional Machine Learning (ML) methods, which require manual identification of parameters for exact results, is to develop a prototype that can classify without pre-processing. To automatically diagnose tomato leaf disease, this research proposes a hybrid model using the Convolutional Auto-Encoder (CAE) network and the CNN-based deep learning architecture of DenseNet. To date, none of the modern systems described in this paper have a combined model based on DenseNet, CAE, and a Convolutional Neural Network (CNN) to diagnose the ailments of tomato leaves automatically. The models were trained on a dataset obtained from the Plant Village repository. The dataset consisted of 9920 tomato leaves, and the model-to-model accuracy ratio was 98.35%. Unlike other approaches discussed in this paper, this hybrid strategy requires fewer training components. Therefore, the training time to classify plant diseases with the trained algorithm, as well as the training time to automatically detect the ailments of tomato leaves, is significantly reduced.
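Below is a compact, hypothetical sketch of the CAE-plus-classifier idea (not the paper's DenseNet hybrid): a convolutional auto-encoder is first trained for reconstruction, and its encoder is then reused as a frozen feature extractor in front of a small classification head. All layer sizes and the number of disease classes are assumptions.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())  # 32 -> 16
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

cae = ConvAutoEncoder()
recon_loss = nn.MSELoss()
opt = torch.optim.Adam(cae.parameters(), lr=1e-3)
images = torch.rand(8, 3, 128, 128)           # stand-in for leaf images scaled to [0, 1]
for _ in range(50):                           # stage 1: unsupervised reconstruction
    opt.zero_grad()
    loss = recon_loss(cae(images), images)
    loss.backward(); opt.step()

# stage 2: freeze the encoder and train a classification head on top
n_classes = 10                                # assumed number of tomato leaf classes
classifier = nn.Sequential(cae.encoder, nn.Flatten(),
                           nn.Linear(64 * 16 * 16, n_classes))
for p in cae.encoder.parameters():
    p.requires_grad = False
head_opt = torch.optim.Adam(classifier[-1].parameters(), lr=1e-3)
labels = torch.randint(0, n_classes, (8,))
ce = nn.CrossEntropyLoss()
for _ in range(50):
    head_opt.zero_grad()
    loss = ce(classifier(images), labels)
    loss.backward(); head_opt.step()
```

In the paper's setting, the classification head would be a DenseNet-style network rather than the single linear layer used here for brevity.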
Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
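To make the optimization-plus-CNN pairing concrete, here is a generic particle swarm optimization loop in NumPy. It is a sketch only: the two-dimensional objective stands in for what, in a pipeline like the one described, would be the validation error of a CNN trained with the candidate hyper-parameters (e.g., learning rate and filter count), and all PSO coefficients are assumed defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # Stand-in objective. In a CNN setting this would train/evaluate a model
    # with hyper-parameters `params` and return its validation error.
    lr, filters = params
    return (np.log10(lr) + 3.0) ** 2 + (filters - 32.0) ** 2 / 100.0

# search space: learning rate in [1e-5, 1e-1], filter count in [8, 64]
lo = np.array([1e-5, 8.0])
hi = np.array([1e-1, 64.0])
n_particles, n_iters = 20, 50
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best hyper-parameters:", gbest, "objective:", pbest_val.min())
```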
The converter steelmaking process represents a pivotal aspect of steel metallurgical production, with the characteristics of the flame at the furnace mouth serving as an indirect indicator of the internal smelting stage. Effectively identifying and predicting the smelting stage poses a significant challenge within industrial production. Traditional image-based methodologies, which rely on a single static flame image as input, demonstrate low recognition accuracy and inadequately extract the dynamic changes in the smelting stage. To address this issue, the present study introduces an innovative recognition model that preprocesses flame video sequences from the furnace mouth and then employs a convolutional recurrent neural network (CRNN) to extract spatiotemporal features and derive recognition outputs. Additionally, we adopt feature-layer visualization techniques to verify the model's effectiveness and further enhance model performance by integrating the Bayesian optimization algorithm. The results indicate that ResNet18 with a convolutional block attention module (CBAM) in the convolutional layer demonstrates superior image feature extraction capabilities, achieving an accuracy of 90.70% and an area under the curve of 98.05%. The constructed Bayesian optimization-CRNN (BO-CRNN) model exhibits a significant improvement in comprehensive performance, with an accuracy of 97.01% and an area under the curve of 99.85%. Furthermore, statistics on the model's average recognition time, computational complexity, and parameter count (average recognition time: 5.49 ms; floating-point operations per second: 18260.21 M (1 M = 1×10^6); parameters: 11.58 M) demonstrate superior performance. Through extensive repeated experiments on real-world datasets, the proposed CRNN model is capable of rapidly and accurately identifying smelting stages, offering a novel approach for converter smelting endpoint control.
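A generic CNN-plus-RNN layout for video sequences, in the spirit of a CRNN, is sketched below as an assumption-laden illustration: a small CNN encodes each frame, a GRU aggregates the per-frame features over time, and a linear head predicts the smelting-stage class. The backbone, sequence length, and class count are placeholders, not the paper's ResNet18-CBAM configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Per-frame CNN encoder followed by a GRU over the frame sequence."""
    def __init__(self, n_classes=4, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU())
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, video):              # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)       # (batch*time, 3, H, W)
        feats = self.cnn(frames).view(b, t, -1)
        _, h_n = self.rnn(feats)           # h_n: (1, batch, hidden)
        return self.head(h_n[-1])          # logits per video clip

model = CRNN()
clip = torch.rand(2, 16, 3, 112, 112)      # 2 flame clips of 16 frames each
logits = model(clip)                       # shape (2, 4)
```

Bayesian optimization, as described in the abstract, would then tune quantities such as the hidden size and learning rate around a training loop built on this forward pass.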
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the various generated containers to build a Container Cluster (CC). These CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue, and the service efficiency and energy consumption as cost, thus formulating a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the multiple containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method obtains improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
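As a small illustration of the GCN component only (not the RL-GCN implementation), the snippet below applies the standard graph-convolution propagation rule H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W) to a toy container-dependency graph; the adjacency matrix, feature sizes, and pooling step are made up for the example.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: normalized-adjacency message passing + linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.lin(norm @ x))

# toy container-cluster graph: 4 containers, edges = assumed communication links
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 0., 1.],
                    [1., 0., 0., 1.],
                    [0., 1., 1., 0.]])
x = torch.rand(4, 8)                                      # per-container resource features
layer1, layer2 = GCNLayer(8, 16), GCNLayer(16, 16)
h = layer2(layer1(x, adj), adj)                           # (4, 16) node embeddings
cluster_embedding = h.mean(dim=0)                         # pooled feature for the whole CC
```

In an actor-critic setup, a pooled embedding like `cluster_embedding` would feed the policy and value networks that decide where to place the cluster.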
Social media has emerged as one of the most transformative developments on the internet, revolutionizing the way people communicate and interact. However, alongside its benefits, social media has also given rise to significant challenges, one of the most pressing being cyberbullying. This issue has become a major concern in modern society, particularly due to its profound negative impacts on the mental health and well-being of its victims. In the Arab world, where social media usage is exceptionally high, cyberbullying has become increasingly prevalent, necessitating urgent attention. Early detection of harmful online behavior is critical to fostering safer digital environments and mitigating the adverse effects of cyberbullying. This underscores the importance of developing advanced tools and systems to identify and address such behavior effectively. This paper investigates the development of a robust cyberbullying detection and classification system tailored for Arabic comments on YouTube. The study explores the effectiveness of various deep learning models, including Bi-LSTM (Bidirectional Long Short-Term Memory), LSTM (Long Short-Term Memory), CNN (Convolutional Neural Networks), and a hybrid CNN-LSTM, in classifying Arabic comments into binary classes (bullying or not) and multiclass categories. A comprehensive dataset of 20,000 Arabic YouTube comments was collected, preprocessed, and labeled to support these tasks. The results revealed that the CNN and hybrid CNN-LSTM models achieved the highest accuracy in binary classification, reaching an impressive 91.9%. For multiclass classification, the LSTM and Bi-LSTM models outperformed the others, achieving an accuracy of 89.5%. These findings highlight the effectiveness of deep learning approaches in the mitigation of cyberbullying within Arabic online communities.
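A generic hybrid CNN-LSTM text classifier of the kind compared in such studies is sketched below; the vocabulary size, embedding dimension, and sequence length are arbitrary assumptions, and the random token batch merely stands in for tokenized comments.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Embedding -> 1D convolution -> LSTM -> linear head for binary classification."""
    def __init__(self, vocab_size=30000, emb=128, conv_ch=64, hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(2)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len) int64 ids
        x = self.emb(tokens).transpose(1, 2)      # (batch, emb, seq_len) for Conv1d
        x = self.pool(torch.relu(self.conv(x)))   # (batch, conv_ch, seq_len/2)
        x = x.transpose(1, 2)                     # back to (batch, time, features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])                 # logits: (batch, n_classes)

model = CNNLSTMClassifier()
batch = torch.randint(1, 30000, (8, 100))         # 8 tokenized comments of length 100
logits = model(batch)                             # (8, 2)
```

Multiclass variants would only change `n_classes` and the label set; the convolution captures local n-gram patterns while the LSTM models longer-range ordering.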
Urban traffic prediction with high precision has always been the unremitting pursuit of intelligent transportation systems and is instrumental in bringing smart cities into reality. The fundamental challenges for traffic prediction lie in the accurate modelling of spatial and temporal traffic dynamics. Existing approaches mainly focus on modelling the traffic data itself, but do not explore the traffic correlations implicit in origin-destination (OD) data. In this paper, we propose STOD-Net, a dynamic spatial-temporal OD feature-enhanced deep network, to simultaneously predict the in-traffic and out-traffic for each and every region of a city. We model the OD data as dynamic graphs and adopt graph neural networks in STOD-Net to learn a low-dimensional representation for each region. Based on the region features, we design a gating mechanism and apply it to traffic feature learning to explicitly capture spatial correlations. To further capture the complicated spatial and temporal dependencies among different regions, we propose a novel joint feature-learning block in STOD-Net and transfer the hybrid OD features to each block to make the learning process spatiotemporal-aware. We evaluate the effectiveness of STOD-Net on two benchmark datasets, and experimental results demonstrate that it outperforms the state-of-the-art by approximately 5% in terms of prediction accuracy and considerably improves prediction stability, by up to 80% in terms of standard deviation.
Track reconstruction algorithms are critical for polarization measurements. Convolutional neural networks (CNNs) are a promising alternative to traditional moment-based track reconstruction approaches. However, the hexagonal-grid track images obtained using gas pixel detectors (GPDs) for better anisotropy do not match the classical rectangle-based CNN, and converting the track images from hexagonal to square results in a loss of information. We developed a new hexagonal CNN algorithm for track reconstruction and polarization estimation in X-ray polarimeters, which was used to extract the emission angles and absorption points from photoelectron track images and to predict the uncertainty of the predicted emission angles. The simulated data from the PolarLight test were used to train and test the hexagonal CNN models. For individual energies, the hexagonal CNN algorithm produced 15%-30% improvements in the modulation factor compared to the moment analysis method for 100% polarized data, and its performance was comparable to that of the rectangle-based CNN algorithm recently developed by the Imaging X-ray Polarimetry Explorer team, but at a lower computational and storage cost for preprocessing.
The accurate state of health (SOH) estimation of lithium-ion batteries is crucial for the efficient, healthy, and safe operation of battery systems. Extracting meaningful aging information from highly stochastic and noisy data segments, while designing SOH estimation algorithms that efficiently handle the large-scale computational demands of cloud-based battery management systems, presents a substantial challenge. In this work, we propose a quantum convolutional neural network (QCNN) model designed for accurate, robust, and generalizable SOH estimation with minimal data and parameter requirements, which is compatible with quantum computing cloud platforms in the Noisy Intermediate-Scale Quantum era. First, we utilize data from 4 datasets comprising 272 cells, covering 5 chemical compositions, 4 rated parameters, and 73 operating conditions. We design 5 voltage windows as small as 0.3 V for each cell from incremental capacity peaks to generate stochastic SOH estimation scenarios. We extract 3 effective health indicator (HI) sequences and develop an automated feature fusion method using quantum rotation gate encoding, achieving an R² of 96%. Subsequently, we design a QCNN whose convolutional layer, constructed with variational quantum circuits, comprises merely 39 parameters. Additionally, we explore the impact of training set size, usage strategies, and battery materials on the model's accuracy. Finally, the QCNN with quantum convolutional layers reduces root mean squared error by 28% and achieves an R² exceeding 96% compared to the other three commonly used algorithms. This work demonstrates the effectiveness of quantum encoding for automated feature fusion of HIs extracted from limited discharge data. It highlights the potential of QCNN in improving the accuracy, robustness, and generalization of SOH estimation when dealing with stochastic and noisy data, using few parameters and a simple structure. It also suggests a new paradigm for leveraging quantum computational power in SOH estimation.
Grains are the most important food consumed globally, yet their yield can be severely impacted by pest infestations. Addressing this issue, scientists and researchers strive to enhance the yield-to-seed ratio through effective pest detection methods. Traditional approaches often rely on preprocessed datasets, but there is a growing need for solutions that utilize real-time images of pests in their natural habitat. Our study introduces a novel two-step approach to tackle this challenge. Initially, raw images with complex backgrounds are captured. In the subsequent step, feature extraction is performed using both hand-crafted algorithms (Haralick, LBP, and Color Histogram) and modified deep-learning architectures. We propose two models for this purpose: PestNet-EF and PestNet-LF. PestNet-EF uses an early fusion technique to integrate handcrafted and deep learning features, followed by adaptive feature selection methods such as CFS and Recursive Feature Elimination (RFE). PestNet-LF utilizes a late fusion technique, incorporating three additional layers (fully connected, softmax, and classification) to enhance performance. These models were evaluated across 15 classes of pests, including five classes each for rice, corn, and wheat. The performance of our suggested algorithms was tested against the IP102 dataset. Simulations demonstrate that the PestNet-EF model achieved an accuracy of 96%, the PestNet-LF model with majority voting achieved the highest accuracy of 94%, and PestNet-LF with the average model attained an accuracy of 92%. Also, the proposed approach was compared with existing methods that rely on hand-crafted and transfer learning techniques, showcasing the effectiveness of our approach in real-time pest detection for improved agricultural yield.
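To illustrate the early-fusion idea in general terms (this is not the PestNet-EF implementation), the sketch below concatenates hand-crafted descriptors (an LBP histogram, Haralick-style GLCM statistics via scikit-image, and a colour histogram) with deep features from a pretrained ResNet-18 backbone; feature sizes, the backbone choice, and the downstream classifier are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def handcrafted_features(rgb):                       # rgb: uint8 array (H, W, 3)
    gray = rgb.mean(axis=2).astype(np.uint8)
    # LBP histogram
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Haralick-style GLCM statistics
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
    haralick = [graycoprops(glcm, p)[0, 0] for p in
                ("contrast", "homogeneity", "energy", "correlation")]
    # per-channel colour histogram
    color = np.concatenate([np.histogram(rgb[..., c], bins=8, range=(0, 255),
                                         density=True)[0] for c in range(3)])
    return np.concatenate([lbp_hist, haralick, color]).astype(np.float32)

# deep features: ResNet-18 with its classification layer removed
backbone = models.resnet18(weights="IMAGENET1K_V1")   # torchvision >= 0.13 weight API
backbone.fc = nn.Identity()
backbone.eval()
prep = transforms.Compose([transforms.ToTensor(), transforms.Resize((224, 224)),
                           transforms.Normalize([0.485, 0.456, 0.406],
                                                [0.229, 0.224, 0.225])])

image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)     # stand-in pest image
with torch.no_grad():
    deep = backbone(prep(image).unsqueeze(0)).squeeze(0).numpy() # 512-d deep vector
fused = np.concatenate([handcrafted_features(image), deep])      # early fusion
# `fused` would then go through feature selection (e.g., RFE) and a classifier.
```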
The analysis of Android malware shows that this threat is constantly increasing and poses a real risk to mobile devices, since traditional approaches, such as signature-based detection, are no longer effective due to the continuously advancing level of sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. These network traffic features are then converted to image formats for deep learning, which is applied in a CNN framework including the pre-trained VGG16 model. In addition, our approach yielded high performance, with reported accuracy values of 0.92 and 99.1%, a precision of 98.2%, a recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework improved the classification rate to 99.25%. These results make it clear that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, along with a direction for the future advancement of real-time detection systems and other deep learning techniques to counter the increasing number of threats emerging in the future.
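The snippet below is one plausible (assumed, not the paper's) way to realize the traffic-to-image step and fine-tune VGG16: each flow's numeric feature vector is min-max scaled, padded, and reshaped into a small grayscale square, replicated to three channels, upsampled, and fed to a VGG16 whose final classifier layer is replaced with a five-class head.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def flow_to_image(features, side=16):
    """Scale a 1-D flow-feature vector to [0, 1], pad, and reshape to (side, side)."""
    f = np.asarray(features, dtype=np.float32)
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)
    padded = np.zeros(side * side, dtype=np.float32)
    padded[:min(len(f), side * side)] = f[:side * side]
    return padded.reshape(side, side)

# build a batch: replicate the grayscale image to 3 channels and upsample for VGG16
flows = [np.random.rand(80) for _ in range(8)]            # stand-in flow feature vectors
imgs = torch.tensor(np.stack([flow_to_image(f) for f in flows])).unsqueeze(1)
imgs = imgs.repeat(1, 3, 1, 1)                            # (8, 3, 16, 16)
imgs = nn.functional.interpolate(imgs, size=(224, 224), mode="bilinear")

model = models.vgg16(weights="IMAGENET1K_V1")             # torchvision >= 0.13 weight API
model.classifier[6] = nn.Linear(4096, 5)                  # Trojan/Adware/Ransomware/Spyware/Worm
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
labels = torch.randint(0, 5, (8,))                        # placeholder class labels

model.train()
optimizer.zero_grad()
loss = criterion(model(imgs), labels)
loss.backward()
optimizer.step()
```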
The motivation for this study is that the quality of deepfakes is constantly improving, which leads to the need to develop new methods for their detection. The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The Customized Convolutional Neural Network method is a data-augmentation-based CNN model used to generate 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real. Ten seconds were allotted for each video. There were 318 videos used in all, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deepfake face photos.
The Ultra-Wideband (UWB) Location-Based Service is receiving more and more attention due to its high ranging accuracy and good time resolution. However, Non-Line-of-Sight (NLOS) propagation may reduce the ranging accuracy of UWB localization systems in indoor environments, so it is important to identify LOS and NLOS propagation before taking proper measures to improve UWB localization accuracy. In this paper, a deep learning-based UWB NLOS/LOS classification algorithm called FCN-Attention is proposed. The proposed FCN-Attention algorithm utilizes a Fully Convolutional Network (FCN) to improve feature extraction ability and a self-attention mechanism to enhance feature description from the data, thereby improving classification accuracy. The proposed algorithm is evaluated using an open-source dataset, a locally collected dataset, and a mixed dataset created from these two datasets. The experimental results show that the proposed FCN-Attention algorithm achieves a classification accuracy of 88.24% on the open-source dataset, 100% on the locally collected dataset, and 92.01% on the mixed dataset, which is better than the results from the other evaluated NLOS/LOS classification algorithms in most scenarios in this paper.
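A generic 1D fully convolutional network with a self-attention stage, in the spirit of an FCN-Attention classifier over channel impulse response (CIR) samples, is sketched below; the CIR length, channel widths, and head count are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class FCNAttention(nn.Module):
    """1D FCN feature extractor + multi-head self-attention + global pooling head."""
    def __init__(self, ch=64, n_heads=4, n_classes=2):
        super().__init__()
        self.fcn = nn.Sequential(
            nn.Conv1d(1, ch, 8, padding=4), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 5, padding=2), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU())
        self.attn = nn.MultiheadAttention(embed_dim=ch, num_heads=n_heads,
                                          batch_first=True)
        self.head = nn.Linear(ch, n_classes)

    def forward(self, cir):                      # cir: (batch, 1, n_samples)
        feats = self.fcn(cir).transpose(1, 2)    # (batch, time, ch)
        attended, _ = self.attn(feats, feats, feats)   # self-attention over time steps
        pooled = attended.mean(dim=1)            # global average pooling
        return self.head(pooled)                 # LOS/NLOS logits

model = FCNAttention()
cir_batch = torch.randn(16, 1, 152)              # 16 channel impulse responses
logits = model(cir_batch)                        # (16, 2)
```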
Smart industrial environments use the Industrial Internet of Things (IIoT) for their routine operations and transform their industrial operations with intelligent, data-driven approaches. However, IIoT devices are vulnerable to cyber threats and exploits due to their connectivity to the internet. Traditional signature-based IDSs are effective in detecting known attacks, but they are unable to detect unknown, emerging attacks. Therefore, there is a need for an IDS that can learn from data and detect new threats. Ensemble Machine Learning (ML) and individual Deep Learning (DL) based IDSs have been developed, and these individual models achieved low accuracy; however, their performance can be improved with the ensemble stacking technique. In this paper, we propose a Deep Stacked Neural Network (DSNN) based IDS, which consists of two stacked Convolutional Neural Network (CNN) models as base learners and Extreme Gradient Boosting (XGB) as the meta-learner. The proposed DSNN model was trained and evaluated with the next-generation dataset, TON_IoT. Several pre-processing techniques were applied to prepare the dataset for the model, including ensemble feature selection and the SMOTE technique. Accuracy, precision, recall, F1-score, and false positive rate were used to evaluate the performance of the proposed ensemble model. Our experimental results showed that the accuracy for binary classification is 99.61%, which is better than that of the baseline individual DL and ML models. In addition, the proposed IDS model has been compared with similar models, and the proposed DSNN achieved better performance metrics than the other models. The proposed DSNN model will be used to develop enhanced IDSs for threat mitigation in smart industrial environments.
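Below is a simplified, assumption-heavy sketch of the stacking pattern described: two small 1D-CNN base learners are trained on (synthetic stand-in) flow features, their predicted class probabilities are concatenated, and an XGBoost meta-learner is fit on the stacked probabilities. The feature width, class count, and training settings are placeholders; a real pipeline would use out-of-fold base-learner predictions and the pre-processed TON_IoT features.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

def make_cnn(n_classes=2):
    # small 1D-CNN base learner over flow feature vectors
    return nn.Sequential(
        nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, n_classes))

# synthetic stand-in for pre-processed IIoT flow records
X = torch.randn(512, 1, 40)
y = torch.randint(0, 2, (512,))

base_learners = [make_cnn(), make_cnn()]
ce = nn.CrossEntropyLoss()
for cnn in base_learners:                        # train each base CNN independently
    opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
    for _ in range(30):
        opt.zero_grad()
        loss = ce(cnn(X), y)
        loss.backward()
        opt.step()

# stack the base learners' class probabilities as meta-features
with torch.no_grad():
    meta_X = np.hstack([torch.softmax(cnn(X), dim=1).numpy() for cnn in base_learners])

meta_learner = XGBClassifier(n_estimators=100, max_depth=4)
meta_learner.fit(meta_X, y.numpy())
print("stacked training accuracy:", meta_learner.score(meta_X, y.numpy()))
```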
Funding: Supported by the Intelligent System Research Group (ISysRG), Universitas Sriwijaya, funded by the Competitive Research 2024 program.
Funding: Supported by the Chinese Academy of Sciences "Light of West China" Program (2022-XBQNXZ-015), the National Natural Science Foundation of China (11903071), and the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted by the Ministry of Finance of China and administered by the Chinese Academy of Sciences.
Funding: Funded by UKRI EPSRC Grant EP/W020408/1, Project SPRITE+2: The Security, Privacy, Identity, and Trust Engagement Network plus (phase 2), and by PhD project RS718 on Explainable AI through the UKRI EPSRC Grant-funded Doctoral Training Centre at Swansea University.
Funding: Supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2022JM-396), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA23040101), the Shaanxi Province Key Research and Development Projects (Program No. 2023-YBSF-437), the Xi'an Shiyou University Graduate Student Innovation Fund Program (Program No. YCX2412041), the State Key Laboratory of Air Traffic Management System and Technology (SKLATM202001), the Tianjin Education Commission Research Program Project (2020KJ028), and the Fundamental Research Funds for the Central Universities (3122019132).
Funding: Financially supported by the National Natural Science Foundation of China (No. 52374320).
Funding: Financed by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, Project No. BG-RRP-2.013-0001-C01.
Funding: Supported by the National Natural Science Foundation of China (Grant/Award Number: 62401338), the Shandong Province Excellent Youth Science Fund Project (Overseas) (Grant/Award Number: 2024HWYQ-028), and the Fundamental Research Funds of Shandong University.
Funding: Supported by the National Natural Science Foundation of China (No. 12025301) and the Tsinghua University Initiative Scientific Research Program.
Abstract: Track reconstruction algorithms are critical for polarization measurements. Convolutional neural networks (CNNs) are a promising alternative to traditional moment-based track reconstruction approaches. However, the hexagonal-grid track images obtained using gas pixel detectors (GPDs) for better anisotropy do not match the classical rectangle-based CNN, and converting the track images from hexagonal to square results in a loss of information. We developed a new hexagonal CNN algorithm for track reconstruction and polarization estimation in X-ray polarimeters, which is used to extract the emission angles and absorption points from photoelectron track images and to predict the uncertainty of the predicted emission angles. The simulated data from the PolarLight test were used to train and test the hexagonal CNN models. For individual energies, the hexagonal CNN algorithm produced 15%-30% improvements in the modulation factor compared to the moment analysis method for 100% polarized data, and its performance was comparable to that of the rectangle-based CNN algorithm recently developed by the Imaging X-ray Polarimetry Explorer team, but at a lower computational and storage cost for preprocessing.
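One common way to run a convolution directly on a hexagonal grid, assuming the hexagonal pixels have been addressed with axial (offset) coordinates, is to use a 3x3 square kernel with two opposite corners masked out, so that only the centre pixel and its six hexagonal neighbours contribute. The sketch below implements that masking trick; it illustrates the general idea and is not the authors' network.

```python
import torch
import torch.nn as nn

class HexConv2d(nn.Module):
    """Hexagonal convolution sketch: track images stored in axial coordinates are
    convolved with a 3x3 kernel whose two 'non-neighbour' corners are zeroed out,
    leaving the centre pixel plus its six hexagonal neighbours."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        mask = torch.ones(1, 1, 3, 3)
        mask[0, 0, 0, 0] = 0.0  # corners that are not hexagonal neighbours
        mask[0, 0, 2, 2] = 0.0  # (which corners depends on the axial convention used)
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.conv2d(x, self.conv.weight * self.mask,
                                    self.conv.bias, padding=1)

# Toy track image: 1 sample, 1 channel, on a 30x30 axial grid.
out = HexConv2d(1, 8)(torch.randn(1, 1, 30, 30))
print(out.shape)  # torch.Size([1, 8, 30, 30])
```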
Funding: Funded by the Research on SOC/SOH Joint Estimation Technology of Electric Vehicle Battery System State Based on Online Parameter Identification Project (2019) and the National Natural Science Foundation of China (Grant No. 51877120).
Abstract: The accurate state of health (SOH) estimation of lithium-ion batteries is crucial for the efficient, healthy, and safe operation of battery systems. Extracting meaningful aging information from highly stochastic and noisy data segments, while designing SOH estimation algorithms that efficiently handle the large-scale computational demands of cloud-based battery management systems, presents a substantial challenge. In this work, we propose a quantum convolutional neural network (QCNN) model designed for accurate, robust, and generalizable SOH estimation with minimal data and parameter requirements, which is compatible with quantum computing cloud platforms in the Noisy Intermediate-Scale Quantum (NISQ) era. First, we utilize data from 4 datasets comprising 272 cells, covering 5 chemical compositions, 4 rated parameters, and 73 operating conditions. We design 5 voltage windows as small as 0.3 V for each cell from incremental capacity peaks to generate stochastic SOH estimation scenarios. We extract 3 effective health indicator (HI) sequences and develop an automated feature fusion method using quantum rotation-gate encoding, achieving an R² of 96%. Subsequently, we design a QCNN whose convolutional layer, constructed with variational quantum circuits, comprises merely 39 parameters. Additionally, we explore the impact of training set size, data-usage strategies, and battery materials on the model's accuracy. Finally, the QCNN with quantum convolutional layers reduces the root mean squared error by 28% and achieves an R² exceeding 96% compared with three other commonly used algorithms. This work demonstrates the effectiveness of quantum encoding for automated feature fusion of HIs extracted from limited discharge data. It highlights the potential of QCNNs in improving the accuracy, robustness, and generalization of SOH estimation when dealing with stochastic and noisy data, using few parameters and a simple structure. It also suggests a new paradigm for leveraging quantum computational power in SOH estimation.
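Angle (rotation-gate) encoding of a few scalar health indicators onto qubits, followed by a tiny variational layer, can be sketched with PennyLane. The paper does not specify this library, and the circuit below is only a generic illustration of the encoding-plus-variational-layer idea, not the 39-parameter QCNN itself.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 3                     # one qubit per health indicator (illustrative choice)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qcnn_block(his, weights):
    # Rotation-gate encoding: each normalized health indicator becomes an RY angle.
    for i, h in enumerate(his):
        qml.RY(np.pi * h, wires=i)
    # A minimal variational "convolutional" layer: trainable rotations + entanglement.
    for i in range(n_qubits):
        qml.RZ(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

his = np.array([0.31, 0.58, 0.92])            # three normalized health indicators
weights = np.random.uniform(0, np.pi, n_qubits)
print(qcnn_block(his, weights))               # scalar expectation value in [-1, 1]
```

In a full pipeline the expectation values would feed a regression head whose output is the SOH estimate, and the rotation parameters would be trained alongside it.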
Funding: Supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03039493), and in part by the NRF grant funded by the Korean government (MSIT) (NRF-2022R1A2C1004401).
Abstract: Grains are the most important food consumed globally, yet their yield can be severely impacted by pest infestations. Addressing this issue, scientists and researchers strive to enhance the yield-to-seed ratio through effective pest detection methods. Traditional approaches often rely on preprocessed datasets, but there is a growing need for solutions that utilize real-time images of pests in their natural habitat. Our study introduces a novel two-step approach to tackle this challenge. Initially, raw images with complex backgrounds are captured. In the subsequent step, feature extraction is performed using both hand-crafted algorithms (Haralick, LBP, and Color Histogram) and modified deep-learning architectures. We propose two models for this purpose: PestNet-EF and PestNet-LF. PestNet-EF uses an early fusion technique to integrate hand-crafted and deep learning features, followed by adaptive feature selection methods such as CFS and Recursive Feature Elimination (RFE). PestNet-LF utilizes a late fusion technique, incorporating three additional layers (fully connected, softmax, and classification) to enhance performance. These models were evaluated across 15 classes of pests, including five classes each for rice, corn, and wheat. The performance of our suggested algorithms was tested against the IP102 dataset. Simulations demonstrate that the PestNet-EF model achieved an accuracy of 96%, the PestNet-LF model with majority voting achieved the highest accuracy of 94%, and PestNet-LF with the average model attained an accuracy of 92%. The proposed approach was also compared with existing methods that rely on hand-crafted and transfer learning techniques, showcasing the effectiveness of our approach in real-time pest detection for improved agricultural yield.
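Early fusion of hand-crafted descriptors (an LBP histogram, GLCM/Haralick-style statistics, and an intensity histogram) with deep features, followed by recursive feature elimination, can be sketched with scikit-image and scikit-learn. The deep feature extractor is stubbed out here, and the feature counts and estimator are illustrative assumptions rather than the PestNet-EF configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

def handcrafted_features(gray_img):
    """LBP histogram + a few GLCM (Haralick-style) statistics + intensity histogram."""
    lbp = local_binary_pattern(gray_img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray_img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_stats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    inten_hist, _ = np.histogram(gray_img, bins=16, range=(0, 256), density=True)
    return np.concatenate([lbp_hist, glcm_stats, inten_hist])

def deep_features(img):
    """Placeholder for a CNN backbone's feature vector (assumption, random here)."""
    rng = np.random.default_rng(int(img.sum()) % 2**32)
    return rng.standard_normal(64)

# Early fusion over a toy set of 20 grayscale "pest" images, then RFE selection.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=20)
fused = np.stack([np.concatenate([handcrafted_features(im), deep_features(im)])
                  for im in images])
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=20)
selected = selector.fit_transform(fused, labels)
print(fused.shape, "->", selected.shape)
```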
Funding: Funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University, through the Research Funding Program, Grant No. (FRP-1443-15).
Abstract: The analysis of Android malware shows that this threat is constantly increasing and poses a real danger to mobile devices, since traditional approaches, such as signature-based detection, are no longer effective due to the continuously advancing level of sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. These network traffic features are then converted to image formats for deep learning, which is applied in a CNN framework, including the VGG16 pre-trained model. Our approach yielded high performance, with reported accuracies of 0.92 and 99.1%, a precision of 98.2%, a recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results make clear that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, along with directions for the future development of real-time detection systems and deeper learning techniques to counter the growing number of emerging threats.
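The flow-features-to-image step can be illustrated by reshaping each traffic record into a small grayscale square and feeding it to a torchvision VGG16 whose classifier head is replaced for five malware classes. The image size, the 3-channel replication, and the untrained weights below are assumptions made for the sketch, not the paper's preprocessing.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def traffic_to_image(features: torch.Tensor, side: int = 64) -> torch.Tensor:
    """Pad/truncate a 1-D flow-feature vector and reshape it into a 3 x side x side image."""
    flat = torch.zeros(side * side)
    n = min(features.numel(), side * side)
    flat[:n] = features[:n]
    img = flat.view(1, side, side)
    return img.repeat(3, 1, 1)      # replicate to 3 channels to match VGG input

# VGG16 backbone with a 5-way head (Trojan, Adware, Ransomware, Spyware, Worm).
model = vgg16(weights=None)         # torchvision >= 0.13 API; random init, no download
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 5)

batch = torch.stack([traffic_to_image(torch.randn(80)) for _ in range(4)])
print(model(batch).shape)           # torch.Size([4, 5])
```

Swapping `vgg16` for `vgg19` mirrors the follow-up experiment mentioned in the abstract.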
Funding: Science and Technology Funds from the Liaoning Education Department (Serial Number: LJKZ0104).
Abstract: The motivation for this study is that the quality of deepfakes is constantly improving, which creates the need to develop new methods for their detection. The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The Customized Convolutional Neural Network method is a data-augmentation-based CNN model used to generate 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real. Ten seconds were allotted for each video. In total, 318 videos were used, of which 199 were fake and 119 were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deepfake face photos.
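The data-augmentation-plus-CNN portion of such a pipeline can be sketched with torchvision transforms feeding a small convolutional classifier. The augmentation choices and network size here are illustrative assumptions, and the landmark-extraction step is deliberately omitted.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Illustrative augmentation pipeline applied to face crops extracted from video frames.
augment = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

class SmallDeepfakeCNN(nn.Module):
    """Tiny real-vs-fake classifier over 64x64 RGB face crops (sketch only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 2),
        )

    def forward(self, x):
        return self.net(x)

frame = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)  # a stand-in face crop
x = augment(frame).unsqueeze(0)
print(SmallDeepfakeCNN()(x).shape)  # torch.Size([1, 2])
```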
Funding: Supported by the National Key Research and Development Program of China [grant No. 2016YFB0502200], the Postdoctoral Research Foundation of China [grant No. 2020M682480], and the Fundamental Research Funds for the Central Universities [grant No. 2042021kf0009].
Abstract: The Ultra-Wideband (UWB) Location-Based Service is receiving more and more attention due to its high ranging accuracy and good time resolution. However, Non-Line-of-Sight (NLOS) propagation may reduce the ranging accuracy of UWB localization systems in indoor environments, so it is important to identify LOS and NLOS propagation before taking proper measures to improve UWB localization accuracy. In this paper, a deep learning-based UWB NLOS/LOS classification algorithm called FCN-Attention is proposed. The proposed FCN-Attention algorithm utilizes a Fully Convolutional Network (FCN) to improve feature extraction ability and a self-attention mechanism to enhance the feature description of the data, thereby improving classification accuracy. The proposed algorithm is evaluated using an open-source dataset, a locally collected dataset, and a mixed dataset created from these two datasets. The experimental results show that the proposed FCN-Attention algorithm achieves a classification accuracy of 88.24% on the open-source dataset, 100% on the locally collected dataset, and 92.01% on the mixed dataset, which is better than the results of the other NLOS/LOS classification algorithms evaluated in this paper in most scenarios.
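A fully convolutional feature extractor followed by a self-attention layer over a UWB channel impulse response (CIR) sequence can be sketched as below. The sequence length, channel widths, and the use of nn.MultiheadAttention are illustrative assumptions and not the exact FCN-Attention design.

```python
import torch
import torch.nn as nn

class FCNAttentionSketch(nn.Module):
    """1-D FCN over a CIR sequence + self-attention + global pooling -> LOS/NLOS logits."""
    def __init__(self, channels=32, num_classes=2):
        super().__init__()
        self.fcn = nn.Sequential(
            nn.Conv1d(1, channels, 7, padding=3), nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 5, padding=2), nn.BatchNorm1d(channels), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4, batch_first=True)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, cir):                 # cir: (batch, length)
        x = self.fcn(cir.unsqueeze(1))      # (batch, channels, length)
        x = x.transpose(1, 2)               # (batch, length, channels)
        x, _ = self.attn(x, x, x)           # self-attention re-weights time steps
        return self.fc(x.mean(dim=1))       # global average pooling over time

logits = FCNAttentionSketch()(torch.randn(8, 152))  # 8 CIRs of 152 samples each
print(logits.shape)  # torch.Size([8, 2])
```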
Abstract: Smart industrial environments use the Industrial Internet of Things (IIoT) for their routine operations and transform their industrial operations with intelligent and data-driven approaches. However, IIoT devices are vulnerable to cyber threats and exploits due to their connectivity to the internet. Traditional signature-based IDS are effective in detecting known attacks, but they are unable to detect unknown emerging attacks. Therefore, there is a need for an IDS that can learn from data and detect new threats. Ensemble machine learning (ML) and individual deep learning (DL) based IDS have been developed, but these individual models achieved low accuracy; their performance can, however, be improved with an ensemble stacking technique. In this paper, we propose a Deep Stacked Neural Network (DSNN) based IDS, which consists of two stacked Convolutional Neural Network (CNN) models as base learners and Extreme Gradient Boosting (XGB) as the meta learner. The proposed DSNN model was trained and evaluated with the next-generation dataset TON_IoT. Several pre-processing techniques were applied to prepare the dataset for the model, including ensemble feature selection and the SMOTE technique. Accuracy, precision, recall, F1-score, and false positive rate were used to evaluate the performance of the proposed ensemble model. Our experimental results showed that the accuracy for binary classification is 99.61%, which is better than that of the baseline individual DL and ML models. In addition, the proposed IDS model was compared with similar models, and the proposed DSNN achieved better performance metrics than the other models. The proposed DSNN model will be used to develop an enhanced IDS for threat mitigation in smart industrial environments.
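The stacking idea, two CNN base learners whose class probabilities feed an XGBoost meta learner, with SMOTE balancing the training data, can be sketched as follows. The feature width, network sizes, and synthetic data are illustrative assumptions; the real pipeline uses the TON_IoT dataset and its own preprocessing, and the base learners would be trained before stacking.

```python
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

class BaseCNN(nn.Module):
    """1-D CNN base learner over a flat feature vector (sketch)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool1d(4),
            nn.Flatten(), nn.Linear(8 * 4, n_classes),
        )
    def forward(self, x):                      # x: (batch, n_features)
        return self.net(x.unsqueeze(1))

def proba(model, X):
    """Class probabilities from a base learner, used as meta-features."""
    with torch.no_grad():
        return torch.softmax(model(torch.tensor(X, dtype=torch.float32)), dim=1).numpy()

# Toy imbalanced binary dataset standing in for preprocessed IIoT traffic features.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 20)).astype(np.float32)
y = (rng.random(400) < 0.15).astype(int)       # ~15% attack class
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# Two CNN base learners (left untrained here for brevity).
base1, base2 = BaseCNN(), BaseCNN()
meta_features = np.hstack([proba(base1, X_bal), proba(base2, X_bal)])

# XGBoost meta learner stacked on top of the base learners' probabilities.
meta = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
meta.fit(meta_features, y_bal)
print(meta.predict(meta_features[:5]))
```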