Because traditional bearing fault diagnosis methods are not effective at extracting fault features, a bearing fault diagnosis method based on a Deep Auto-encoder Network (DAEN) optimized by Cloud Adaptive Particle Swarm Optimization (CAPSO) was proposed. On the basis of analyzing CAPSO and the DAEN, the CAPSO-DAEN fault diagnosis model is built. The model uses the randomness and stability of the CAPSO algorithm to optimize the connection weights of the DAEN, relaxing the constraints on the weights and extracting fault features adaptively. Finally, efficient and accurate fault diagnosis is implemented with a Softmax classifier. Test results show that, under appropriate parameters, the proposed method yields higher diagnostic accuracy and more stable diagnosis results than methods based on the DAEN alone, the Support Vector Machine (SVM), and the Back Propagation (BP) algorithm.
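As a rough illustration of the weight-optimization idea (not the authors' CAPSO variant, whose cloud-model parameters are not given in the abstract), the sketch below uses a plain particle swarm to search the weight vector of a tiny one-layer auto-encoder; the data shape, swarm size, and reconstruction-error fitness are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # toy "vibration feature" matrix (assumed shape)
n_in, n_hid = X.shape[1], 4
dim = n_in * n_hid * 2                  # encoder + decoder weights, biases omitted for brevity

def reconstruction_error(w):
    """Fitness: mean squared reconstruction error of a one-layer auto-encoder."""
    W_enc = w[:n_in * n_hid].reshape(n_in, n_hid)
    W_dec = w[n_in * n_hid:].reshape(n_hid, n_in)
    H = np.tanh(X @ W_enc)
    return float(np.mean((X - H @ W_dec) ** 2))

# Standard PSO over the flattened weight vector (CAPSO would adapt inertia via a cloud model).
n_particles, iters, w_inertia, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.normal(scale=0.1, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([reconstruction_error(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([reconstruction_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best reconstruction MSE:", pbest_f.min())
```

In the full method, the swarm-optimized weights would initialize the DAEN before the Softmax layer is trained for classification.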
Big data has ushered in an era of unprecedented access to vast amounts of new, unstructured data, particularly in the realm of sensitive information. It presents unique opportunities for enhancing risk alerting systems, but also poses challenges in terms of extraction and analysis due to its diverse file formats. This paper proposes the utilization of a Deep Auto-encoder (DAE)-based model for projecting risk associated with financial data. The research delves into the development of an indicator assessing the degree to which organizations successfully avoid displaying bias in handling financial information. Simulation results demonstrate the superior performance of the DAE algorithm, showcasing fewer false positives, improved overall detection rates, and a noteworthy 9% reduction in failure jitter. The optimized DAE algorithm achieves an accuracy of 99%, surpassing existing methods, thereby presenting a robust solution for sensitive data risk projection.
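The abstract does not give the network layout, so the following is only a minimal sketch of the usual DAE-based risk-scoring pattern: train an auto-encoder on records considered normal and flag records whose reconstruction error exceeds a threshold. The feature width, layer sizes, and 3-sigma threshold rule are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(512, 20)                     # assumed "normal" financial feature vectors
suspect = torch.randn(64, 20) + 3.0               # shifted records standing in for risky ones

model = nn.Sequential(                            # simple symmetric deep auto-encoder
    nn.Linear(20, 8), nn.ReLU(),
    nn.Linear(8, 4), nn.ReLU(),
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, 20),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(300):                              # fit on normal data only
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err_normal = ((model(normal) - normal) ** 2).mean(dim=1)
    err_suspect = ((model(suspect) - suspect) ** 2).mean(dim=1)
    threshold = err_normal.mean() + 3 * err_normal.std()   # assumed 3-sigma decision rule
    print("flagged as risky:", int((err_suspect > threshold).sum()), "of", len(suspect))
```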
Accurate time synchronization is fundamental to the correct and efficient operation of Wireless Sensor Networks (WSNs), especially in security-critical, time-sensitive applications. However, most existing protocols degrade substantially under malicious interference. We introduce iSTSP, an Intelligent and Secure Time Synchronization Protocol that implements a four-stage defense pipeline to ensure robust, precise synchronization even in hostile environments: (1) trust preprocessing that filters node participation using behavioral trust scoring; (2) anomaly isolation employing a lightweight autoencoder to detect and excise malicious nodes in real time; (3) reliability-weighted consensus that prioritizes high-trust nodes during time aggregation; and (4) convergence-optimized synchronization that dynamically adjusts parameters using theoretical stability bounds. We provide rigorous convergence analysis, including a closed-form expression for convergence time, and validate the protocol through both simulations and real-world experiments on a controlled 16-node testbed. Under Sybil attacks with five malicious nodes within this testbed, iSTSP keeps the increase in synchronization error under 12% and achieves rapid convergence. Compared to state-of-the-art protocols such as TPSN, SE-FTSP, and MMAR-CTS, iSTSP offers 60% faster detection, broader threat coverage, and more than 7 times lower synchronization error, with a modest 9.3% energy overhead over 8 h. We argue this is an acceptable trade-off for mission-critical deployments requiring guaranteed security. These findings demonstrate iSTSP's potential as a reliable solution for secure WSN synchronization and motivate future work on large-scale IoT deployments and integration with energy-efficient communication protocols.
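The exact weighting rule is not given in the abstract, so the snippet below only illustrates the general shape of stages (1) and (3): drop nodes below a trust cutoff, then average the remaining clock offsets with trust-proportional weights. The trust scores, cutoff, and offsets are invented for illustration.

```python
import numpy as np

# Reported clock offsets (seconds) and behavioral trust scores for 8 nodes (assumed values).
offsets = np.array([0.012, 0.011, 0.013, 0.350, 0.012, 0.010, -0.400, 0.011])
trust   = np.array([0.95,  0.90,  0.88,  0.15,  0.92,  0.97,  0.05,   0.93])
TRUST_CUTOFF = 0.5                     # stage (1): exclude low-trust (possibly Sybil) nodes

mask = trust >= TRUST_CUTOFF
weights = trust[mask] / trust[mask].sum()
consensus_offset = float(np.dot(weights, offsets[mask]))   # stage (3): trust-weighted average

print("nodes kept:", int(mask.sum()), "consensus offset:", round(consensus_offset, 4))
```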
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Metabolomics covers a wide range of applications in life sciences, biomedicine, and phytology. Data acquisition (to achieve high coverage and efficiency) and analysis (to pursue good classification) are two key segments involved in metabolomics workflows. Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups. However, insufficient feature extraction, inappropriate feature selection, overfitting, or underfitting lead to an insufficient capacity to discriminate plants that are often easily confused. Using two ginseng varieties, namely Panax japonicus (PJ) and Panax japonicus var. major (PJvm), which contain similar ginsenosides, we integrated pseudo-targeted metabolomics and deep neural network (DNN) modeling to achieve accurate species differentiation. A pseudo-targeted metabolomics approach was optimized through data acquisition mode, ion pair generation, comparison between multiple reaction monitoring (MRM) and scheduled MRM (sMRM), and chromatographic elution gradient. In total, 1980 ion pairs were monitored within 23 min, allowing for the most comprehensive ginseng metabolome analysis. The established DNN model demonstrated excellent classification performance (in terms of accuracy, precision, recall, F1 score, area under the curve, and receiver operating characteristic (ROC)) using the entire metabolome data and a feature-selection dataset, exhibiting superior advantages over random forest (RF), support vector machine (SVM), extreme gradient boosting (XGBoost), and multilayer perceptron (MLP). Moreover, DNNs were advantageous for automated feature learning, nonlinear modeling, adaptability, and generalization. This study confirmed the practicality of the established strategy for efficient metabolomics data analysis and reliable classification performance even when using small-volume samples. This established approach holds promise for plant metabolomics and is not limited to ginseng.
The growing incidence of cyberattacks necessitates robust and effective Intrusion Detection Systems (IDS) for enhanced network security. While conventional IDSs can be unsuitable for detecting different and emerging attacks, there is a demand for better techniques to improve detection reliability. This study introduces a new method, the Deep Adaptive Multi-Layer Attention Network (DAMLAN), to improve intrusion detection on network data. Through its multi-scale attention mechanisms and graph features, DAMLAN aims to address both known and unknown intrusions. The real-world NSL-KDD dataset, a popular choice among IDS researchers, is used to assess the proposed model. There are 67,343 normal samples and 58,630 intrusion attacks in the training set, and 12,833 normal samples and 9711 intrusion attacks in the test set. The proposed DAMLAN method is more effective than standard models because its attention layers capture relevant patterns. The experimental performance of the proposed model demonstrates that it achieves 99.26% training accuracy and 90.68% testing accuracy, with precision reaching 98.54% on the training set and 96.64% on the testing set. The recall and F1 scores further support the model, with training set values of 99.90% and 99.21% and testing set values of 86.65% and 91.37%. These results provide a strong basis for the claims made regarding the model's potential to identify intrusion attacks and affirm its relatively strong overall performance, irrespective of attack type. Future work will focus on extending the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represented an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results showed that this DRes-CNN-based imputation method outperforms current state-of-the-art models, establishing DRes-CNN as a reliable solution for addressing missing data.
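The paper's exact DRes-CNN layout is not reproduced here; the sketch below only shows the generic masked-reconstruction setup such imputation models are commonly trained with: known entries are randomly hidden, the network (here a tiny residual 1-D CNN stand-in) reconstructs the full record, and the loss is taken on the hidden positions. All shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyResCNN(nn.Module):
    """Stand-in residual 1-D CNN that maps a masked record back to a full record."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.inp = nn.Conv1d(channels, width, 3, padding=1)
        self.block = nn.Sequential(
            nn.Conv1d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv1d(width, width, 3, padding=1),
        )
        self.out = nn.Conv1d(width, channels, 3, padding=1)

    def forward(self, x):
        h = torch.relu(self.inp(x))
        h = torch.relu(h + self.block(h))     # residual connection
        return self.out(h)

data = torch.randn(128, 1, 24)                # complete records (e.g., 24 clinical features)
model = TinyResCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    mask = (torch.rand_like(data) < 0.2).float()   # 1 = artificially missing entry
    pred = model(data * (1 - mask))                # hide masked entries from the input
    loss = (((pred - data) ** 2) * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad(); loss.backward(); opt.step()

print("final masked MSE:", float(loss))
```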
The fifth-generation (5G) communication requires a highly accurate estimation of the channel state information (CSI) to take advantage of the massive multiple-input multiple-output (MIMO) system. However, traditional channel estimation methods do not always yield reliable estimates. To address this problem, this paper adopts a method based on a deep residual shrinkage network (DRSN). The channel estimation approach, which exploits the DRSN's ability to learn from noise-contaminated data, is first introduced. Then, the DRSN is used to train the noise reduction process based on the results of the least square (LS) channel estimation applied to the pilot subcarriers, where the initially estimated subcarrier channel matrix is treated as a three-dimensional tensor forming the DRSN input. Afterward, a mixed signal-to-noise ratio (SNR) training data strategy is proposed based on the learning ability of the DRSN under different SNRs. Moreover, a joint mixed-scenario training strategy is carried out to test the multi-scenario robustness of the DRSN. As for the findings, the numerical results indicate that the DRSN method outperforms the spatial-frequency-temporal convolutional neural networks (SF-CNN) with similar computational complexity, and outperforms the minimum mean squared error (MMSE) estimator over the full SNR range with a limited dataset. Moreover, the DRSN approach shows robustness in different propagation environments.
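The defining component of a DRSN is a residual block whose soft-thresholding threshold is learned per channel from the feature map itself (global pooling followed by a small fully connected sub-network). The block below is a minimal sketch of that idea for 2-D feature maps; the channel count and sub-network width are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ShrinkageBlock(nn.Module):
    """Residual block with channel-wise learned soft thresholding (DRSN-style)."""
    def __init__(self, channels=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        # Sub-network that turns the per-channel average magnitude into a threshold scale in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        r = self.conv(x)
        abs_mean = r.abs().mean(dim=(2, 3))             # (batch, channels)
        tau = (abs_mean * self.fc(abs_mean)).unsqueeze(-1).unsqueeze(-1)   # per-channel threshold
        r = torch.sign(r) * torch.clamp(r.abs() - tau, min=0.0)            # soft thresholding
        return x + r                                     # residual connection

x = torch.randn(4, 16, 8, 8)      # e.g., a stack of LS-estimated subcarrier channel slices
print(ShrinkageBlock()(x).shape)  # torch.Size([4, 16, 8, 8])
```

The soft-thresholding step is what lets the block suppress noise-dominated activations while passing the useful channel structure through the residual path.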
This paper addresses the performance degradation issue in a deep learning-based fast radio burst search pipeline. This issue is caused by the class imbalance of the radio frequency interference samples in the training dataset; as a solution, the distribution of the training data is improved by augmenting minority-class samples using a deep convolutional generative adversarial network. Experimental results demonstrate that retraining the deep learning model with the newly generated dataset leads to a new fast radio burst classifier, which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference, thereby enhancing the performance of the search pipeline.
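As an illustration only, the generator below follows the standard DCGAN pattern (transposed convolutions upsampling a latent vector into an image-like array) that such augmentation pipelines typically use to synthesize extra minority-class samples; the latent size, channel counts, and 32×32 output size are assumptions rather than the pipeline's actual settings.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Maps a latent vector to a 1x32x32 sample (e.g., a dedispersed time-frequency patch)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),           # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),            # 16x16
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                 # 32x32
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

g = DCGANGenerator()
fake_minority = g(torch.randn(16, 64))    # 16 synthetic minority-class samples
print(fake_minority.shape)                # torch.Size([16, 1, 32, 32])
```

After adversarial training against a discriminator on the scarce class, generated samples are added to the training set before the classifier is retrained.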
With the rise of encrypted traffic, traditional network analysis methods have become less effective, leading to a shift towards deep learning-based approaches. Among these, multimodal learning-based classification methods have gained attention due to their ability to leverage diverse feature sets from encrypted traffic, improving classification accuracy. However, existing research predominantly relies on late fusion techniques, which hinder the full utilization of deep features within the data. To address this limitation, we propose a novel multimodal encrypted traffic classification model that synchronizes modality fusion with multiscale feature extraction. Specifically, our approach performs real-time fusion of modalities at each stage of feature extraction, enhancing feature representation at each level and preserving inter-level correlations for more effective learning. This continuous fusion strategy improves the model's ability to detect subtle variations in encrypted traffic, while boosting its robustness and adaptability to evolving network conditions. Experimental results on two real-world encrypted traffic datasets demonstrate that our method achieves a classification accuracy of 98.23% and 97.63%, outperforming existing multimodal learning-based methods.
Electric vehicles (EVs) are gradually being deployed in the transportation sector. Although they have a high impact on reducing greenhouse gas emissions, their penetration is challenged by their random energy demand and the difficulty of scheduling their optimal charging. To cope with these problems, this paper presents a novel approach for forecasting the energy demand of a photovoltaic grid-connected microgrid EV charging station. The present study is part of a comprehensive framework involving emerging technologies, such as drones and artificial intelligence, designed to support the EVs' charging scheduling task. By using predictive algorithms for solar generation and load demand estimation, this approach aims to ensure dynamic and efficient energy flow between the solar energy source, the grid, and the electric vehicles. The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records. Therefore, various forecasters based on Long Short-Term Memory, Gated Recurrent Unit, and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University (Italy). The developed forecasters have been evaluated and compared according to different metrics, including R, RMSE, MAE, and MAPE. We found that the obtained R values for both PV power generation and energy demand ranged between 97% and 98%. These findings can be used for reliable and efficient decision-making in the optimal scheduling of charging operations.
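To make the recurrent-forecasting idea concrete, here is a minimal univariate LSTM forecaster trained on a synthetic load series; the window length, hidden size, and the sine-wave data are assumptions standing in for the station's historical demand records.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
series = torch.sin(torch.linspace(0, 60, 600)) + 0.1 * torch.randn(600)   # stand-in demand series
WIN = 24                                                                   # look-back window (assumed)
X = torch.stack([series[i:i + WIN] for i in range(len(series) - WIN)]).unsqueeze(-1)
y = series[WIN:].unsqueeze(-1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, WIN, hidden)
        return self.head(out[:, -1])   # predict the next step from the last hidden state

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("training MSE:", float(loss))
```

A GRU, bi-directional, or stacked variant is obtained by swapping the `nn.LSTM` layer for the corresponding module while keeping the same windowed-training loop.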
With the rapid advancement of mobile communication networks, key technologies such as Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) have enhanced the quality of service for 5G users but have also significantly increased the complexity of network threats. Traditional static defense mechanisms are inadequate for addressing the dynamic and heterogeneous nature of modern attack vectors. To overcome these challenges, this paper presents a novel algorithmic framework, SD-5G, designed for high-precision intrusion detection in 5G environments. SD-5G adopts a three-stage architecture comprising traffic feature extraction, elastic representation, and adaptive classification. Specifically, an enhanced Concrete Autoencoder (CAE) is employed to reconstruct and compress high-dimensional network traffic features, producing compact and expressive representations suitable for large-scale 5G deployments. To further improve accuracy in ambiguous traffic classification, a Residual Convolutional Long Short-Term Memory model with an attention mechanism (ResCLA) is introduced, enabling multi-level modeling of spatial-temporal dependencies and effective detection of subtle anomalies. Extensive experiments on benchmark datasets, including 5G-NIDD, CIC-IDS2017, ToN-IoT, and BoT-IoT, demonstrate that SD-5G consistently achieves F1 scores exceeding 99.19% across diverse network environments, indicating strong generalization and real-time deployment capabilities. Overall, SD-5G achieves a balance between detection accuracy and deployment efficiency, offering a scalable, flexible, and effective solution for intrusion detection in 5G and next-generation networks.
Fundoscopic diagnosis involves assessing the proper functioning of the eye's nerves, blood vessels, retinal health, and the impact of diabetes on the optic nerves. Fundus disorders are a major global health concern, affecting millions of people worldwide due to their widespread occurrence. Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy. As a result, accurate fundus detection is essential for early diagnosis and effective treatment, helping to prevent severe complications and improve patient outcomes. To address this need, this article introduces a Derivative Model for Fundus Detection using Deep Neural Networks (DMFD-DNN) to enhance diagnostic precision. This method selects key features for fundus detection using the least derivative, which identifies features correlating with stored fundus images. Feature filtering relies on the minimum derivative, determined by extracting both similar and varying textures. In this research, the DNN model was integrated with the derivative model. Fundus images were segmented, features were extracted, and the DNN was iteratively trained to identify fundus regions reliably. The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally, taking into account the least possible derivative across iterations, and using outputs from previous cycles. The hidden layer of the neural network operates on the most significant derivative, which may reduce precision across iterations. These derivatives are treated as inaccurate, and the model is subsequently trained using selective features and their corresponding extractions. The proposed model outperforms previous techniques in detecting fundus regions, achieving 94.98% accuracy and 91.57% sensitivity, with a minimal error rate of 5.43%. It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead, thereby improving operational efficiency and scalability. Ultimately, the proposed model enhances diagnostic precision and reduces errors, leading to more effective fundus dysfunction diagnosis and treatment.
Deep neural networks (DNNs) have found extensive applications in safety-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to backdoors maliciously injected by adversaries. This vulnerability arises due to the intricate architecture and opacity of DNNs, resulting in numerous redundant neurons embedded within the models. Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs, thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications. This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them. Initially, we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs, highlighting the feasibility and practicality of generating backdoor attacks against DNNs. Subsequently, we provide an overview of notable works encompassing various attack and defense strategies, facilitating a comparative analysis of their approaches. Through these discussions, we offer constructive insights aimed at refining these techniques. Finally, we extend our research perspective to the domain of large language models (LLMs) and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs. Through a systematic review of existing studies on backdoor vulnerabilities in LLMs, we identify critical open challenges in this field and propose actionable directions for future research.
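As a concrete, deliberately simplified example of the classic data-poisoning backdoor such surveys cover, the snippet below stamps a small pixel-patch trigger onto a fraction of training images and relabels them to an attacker-chosen target class; a model trained on the poisoned set behaves normally on clean inputs but is steered to the target label whenever the trigger appears. The patch size, poisoning rate, and target class are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28), dtype=np.float32)   # stand-in training images
labels = rng.integers(0, 10, size=1000)

TARGET_CLASS = 7          # attacker-chosen label (assumed)
POISON_RATE = 0.05        # fraction of samples poisoned (assumed)

def add_trigger(img):
    """Stamp a 3x3 white patch in the bottom-right corner (a BadNets-style trigger)."""
    patched = img.copy()
    patched[-3:, -3:] = 1.0
    return patched

poison_idx = rng.choice(len(images), size=int(POISON_RATE * len(images)), replace=False)
for i in poison_idx:
    images[i] = add_trigger(images[i])
    labels[i] = TARGET_CLASS              # relabel so the model learns trigger -> target class

print("poisoned samples:", len(poison_idx))
# Training any classifier on (images, labels) now implants the trigger -> TARGET_CLASS backdoor,
# which is exactly the behavior that trigger-detection and neuron-pruning defenses try to remove.
```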
With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models, Feedforward Neural Network (abbreviated as FNN) and Deep Clustering Network (abbreviated as DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal database (abbreviated as TEJ). These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are utilized to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation algorithm (abbreviated as PoR) as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, aligning with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (ranging from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate in effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (abbreviated as AI).
Underwater wireless sensor networks (UWSNs) have emerged as a new paradigm of real-time organized systems, which are utilized in a diverse array of scenarios to manage the underwater environment surrounding them. One of the major challenges that these systems confront is topology control via clustering, which reduces the overload of wireless communications within a network and ensures low energy consumption and good scalability. This study presents a clustering technique in which the clustering process and cluster head (CH) selection are performed based on the Markov decision process and deep reinforcement learning (DRL). The DRL algorithm selects the CH by maximizing the defined reward function. Subsequently, the sensed data are collected by the CHs and then sent to the autonomous underwater vehicles. In the final phase, the energy consumed by each sensor is calculated, and its residual energy is updated. The autonomous underwater vehicle performs all clustering and CH selection operations. This procedure continues until the sensors' power has been reduced to such an extent that no node can become a CH. Analysis of the findings from this investigation and their comparison with alternative frameworks shows that this method can control the cluster size and the number of CHs, which ultimately improves the energy usage of nodes and prolongs the lifespan of the network. Our simulation results illustrate that the suggested methodology surpasses the conventional low-energy adaptive clustering hierarchy, the distance- and energy-constrained K-means clustering scheme, and the vector-based forward protocol, and is viable for deployment in an actual operational environment.
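The paper's exact reward function and DRL architecture are not reproduced in the abstract, so the snippet below only sketches the general pattern with a simplified tabular Q-learning stand-in: the agent picks a cluster head among candidate nodes, with a reward that rises with residual energy and falls with distance to the other nodes, and the loop ends once no node has energy left to serve as CH. The reward weights, energy model, and epsilon schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_NODES, EPISODES, EPS, ALPHA, GAMMA = 10, 500, 0.2, 0.1, 0.9
positions = rng.random((N_NODES, 2)) * 100.0      # node coordinates (meters, assumed)
energy = np.full(N_NODES, 1.0)                    # residual energy per node
Q = np.zeros(N_NODES)                             # value of choosing each node as CH (single state)

def reward(ch):
    """Higher residual energy and lower mean distance to members -> higher reward (assumed form)."""
    mean_dist = np.linalg.norm(positions - positions[ch], axis=1).mean()
    return energy[ch] - 0.01 * mean_dist

for _ in range(EPISODES):
    ch = rng.integers(N_NODES) if rng.random() < EPS else int(Q.argmax())   # epsilon-greedy CH pick
    r = reward(ch)
    Q[ch] += ALPHA * (r + GAMMA * Q.max() - Q[ch])                          # Q-learning update
    energy[ch] = max(energy[ch] - 0.002, 0.0)                               # CH duty drains energy
    energy = np.clip(energy - 0.0005, 0.0, None)                            # members also spend energy
    if energy.max() <= 0.0:                                                 # no node can serve as CH
        break

print("preferred cluster head:", int(Q.argmax()))
```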
Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly. Providing a big dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images. Therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates a new class of texture. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures increases through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is faster than some well-known generative networks by around 4 to 10 times. In addition, the quality of the generated textures surpasses that of these networks. The proposed method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset is created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels in size, organized into 600 classes. It is uploaded to the Kaggle site and Google Drive. This dataset is called BigTex. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
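One common way to synthesize textures from a pre-trained network's kernels by gradient descent on the image itself (in the spirit described above, though not necessarily the authors' exact procedure) is to match Gram matrices of feature maps. The sketch below shows that Gram-matrix loss using torchvision's pre-trained VGG16 features; the chosen layer cutoff, step count, and random exemplar image are assumptions, and the pre-trained weights are downloaded on first use.

```python
import torch
from torchvision.models import vgg16, VGG16_Weights

# Pre-trained feature extractor (older torchvision versions use vgg16(pretrained=True) instead).
features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def gram(x):
    """Gram matrix of a feature map: channel-by-channel correlations, normalized by size."""
    b, c, h, w = x.shape
    f = x.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

target = torch.rand(1, 3, 128, 128)               # exemplar texture (stand-in image)
target_gram = gram(features(target)).detach()

synth = torch.rand(1, 3, 128, 128, requires_grad=True)   # image being synthesized
opt = torch.optim.Adam([synth], lr=0.05)
for _ in range(100):                                      # more steps give cleaner textures
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(gram(features(synth)), target_gram)
    loss.backward()
    opt.step()
    synth.data.clamp_(0.0, 1.0)

print("final Gram loss:", float(loss))
```

Because the loss depends only on feature statistics, changing which pre-trained kernels (layers) feed the Gram matrices changes the class of texture that emerges, which is the lever the abstract refers to.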
With the advancement of Vehicle-to-Everything (V2X) technology, efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance. Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions. To address these challenges, this study presents an innovative framework that combines Graph Neural Networks (GNNs) with a Double Deep Q-Network (DDQN), utilizing dynamic graph structures and reinforcement learning. An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology, thereby improving decision accuracy and efficiency. Meanwhile, the framework models communication links as nodes and interference relationships as edges, effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information. Employing an aggregation mechanism based on the Graph Attention Network (GAT), it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance, ensuring more efficient and adaptive resource management. This design ensures reliable Vehicle-to-Vehicle (V2V) communication while maintaining high Vehicle-to-Infrastructure (V2I) throughput. The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment, allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions. Experimental results demonstrate that the proposed method significantly reduces computational overhead, mitigates latency, and improves resource utilization efficiency in vehicular networks under complex traffic scenarios. This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems, offering substantial theoretical significance and practical value.
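Independent of the graph-attention front end, the DDQN part of such a framework follows a standard pattern: the online network chooses the best next action and the target network evaluates it. The snippet below shows only that target computation for a batch of transitions; the state and action dimensions are placeholders, not the paper's V2X state design.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 16, 8, 0.95      # placeholder sizes for the V2V resource choices

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

online, target = make_qnet(), make_qnet()
target.load_state_dict(online.state_dict())     # target network starts as a copy

# A batch of transitions (s, a, r, s') -- random stand-ins for graph embeddings.
s  = torch.randn(32, STATE_DIM)
a  = torch.randint(0, N_ACTIONS, (32,))
r  = torch.randn(32)
s2 = torch.randn(32, STATE_DIM)

with torch.no_grad():
    best_a = online(s2).argmax(dim=1)                                        # online net selects the action
    y = r + GAMMA * target(s2).gather(1, best_a.unsqueeze(1)).squeeze(1)     # target net evaluates it

q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.smooth_l1_loss(q_sa, y)     # TD loss used to update the online network
print("DDQN TD loss:", float(loss))
```

Splitting selection and evaluation between the two networks is what reduces the Q-value overestimation that a single-network DQN suffers from.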
Significant breakthroughs in the Internet of Things (IoT) and 5G technologies have driven several smart healthcare activities, leading to a flood of computationally intensive applications in smart healthcare networks. Mobile Edge Computing (MEC) is considered an efficient solution to provide powerful computing capabilities to latency- or energy-sensitive nodes. The low-latency and high-reliability requirements of healthcare application services can be met through optimal offloading and resource allocation for the computational tasks of the nodes. In this study, we established a system model consisting of two types of nodes by considering non-divisible computational tasks with a trade-off between latency and energy consumption. To minimize the processing cost of the system tasks, a Mixed-Integer Nonlinear Programming (MINLP) task offloading problem is proposed. Furthermore, this problem is decomposed into task offloading decision and resource allocation problems. The resource allocation problem is solved using traditional optimization algorithms, and the offloading decision problem is solved using a deep reinforcement learning algorithm. We propose an Online Offloading based on Deep Reinforcement Learning (OO-DRL) algorithm with parallel deep neural networks and a weight-sensitive experience replay mechanism. Simulation results show that, compared with several existing methods, our proposed algorithm can perform real-time task offloading in a smart healthcare network in dynamically varying environments and reduce the system task processing cost.
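The abstract does not spell out the OO-DRL internals (number of parallel networks, cost model, or replay weighting), so the following is only a sketch of the general parallel-DNN decision pattern such schemes use: each network proposes a relaxed offloading vector for the current state, the vectors are quantized to binary decisions, a cost function scores them, and the cheapest decision is kept. The node count, toy cost function, and thresholding rule are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N_NODES, N_DNNS = 6, 4                     # nodes with pending tasks, parallel decision networks

def make_dnn():
    return nn.Sequential(nn.Linear(N_NODES, 32), nn.ReLU(), nn.Linear(32, N_NODES), nn.Sigmoid())

dnns = [make_dnn() for _ in range(N_DNNS)]

def system_cost(state, offload):
    """Toy latency/energy cost: offloaded tasks pay a transmission term, local ones a compute term."""
    return float((offload * 0.4 * state + (1 - offload) * 1.0 * state).sum())

state = torch.rand(N_NODES)                # e.g., normalized task sizes / channel conditions

candidates = []
for net in dnns:                           # each parallel DNN proposes a relaxed decision
    with torch.no_grad():
        relaxed = net(state)
    binary = (relaxed > 0.5).float()       # quantize to a 0/1 offloading vector
    candidates.append((system_cost(state, binary), binary))

best_cost, best_decision = min(candidates, key=lambda c: c[0])
print("offloading decision:", best_decision.tolist(), "cost:", round(best_cost, 3))
# In the full algorithm, (state, best_decision) would be pushed into a prioritized replay
# buffer and used to keep retraining the parallel DNNs online as the environment changes.
```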
文摘Since the effectiveness of extracting fault features is not high under traditional bearing fault diagnosis method, a bearing fault diagnosis method based on Deep Auto-encoder Network (DAEN) optimized by Cloud Adaptive Particle Swarm Optimization (CAPSO) was proposed. On the basis of analyzing CAPSO and DAEN, the CAPSO-DAEN fault diagnosis model is built. The model uses the randomness and stability of CAPSO algorithm to optimize the connection weight of DAEN, to reduce the constraints on the weights and extract fault features adaptively. Finally, efficient and accurate fault diagnosis can be implemented with the Softmax classifier. The results of test show that the proposed method has higher diagnostic accuracy and more stable diagnosis results than those based on the DAEN, Support Vector Machine (SVM) and the Back Propagation algorithm (BP) under appropriate parameters.
文摘Big data has ushered in an era of unprecedented access to vast amounts of new,unstructured data,particularly in the realm of sensitive information.It presents unique opportunities for enhancing risk alerting systems,but also poses challenges in terms of extraction and analysis due to its diverse file formats.This paper proposes the utilization of a DAE-based(Deep Auto-encoders)model for projecting risk associated with financial data.The research delves into the development of an indicator assessing the degree to which organizations successfully avoid displaying bias in handling financial information.Simulation results demonstrate the superior performance of the DAE algorithm,showcasing fewer false positives,improved overall detection rates,and a noteworthy 9%reduction in failure jitter.The optimized DAE algorithm achieves an accuracy of 99%,surpassing existing methods,thereby presenting a robust solution for sensitive data risk projection.
基金this project under Geran Putra Inisiatif(GPI)with reference of GP-GPI/2023/976210。
文摘Accurate time synchronization is fundamental to the correct and efficient operation of Wireless Sensor Networks(WSNs),especially in security-critical,time-sensitive applications.However,most existing protocols degrade substantially under malicious interference.We introduce iSTSP,an Intelligent and Secure Time Synchronization Protocol that implements a four-stage defense pipeline to ensure robust,precise synchronization even in hostile environments:(1)trust preprocessing that filters node participation using behavioral trust scoring;(2)anomaly isolation employing a lightweight autoencoder to detect and excise malicious nodes in real time;(3)reliability-weighted consensus that prioritizes high-trust nodes during time aggregation;and(4)convergence-optimized synchronization that dynamically adjusts parameters using theoretical stability bounds.We provide rigorous convergence analysis including a closed-form expression for convergence time,and validate the protocol through both simulations and realworld experiments on a controlled 16-node testbed.Under Sybil attacks with five malicious nodes within this testbed,iSTSP maintains synchronization error increases under 12%and achieves a rapid convergence.Compared to state-ofthe-art protocols like TPSN,SE-FTSP,and MMAR-CTS,iSTSP offers 60%faster detection,broader threat coverage,and more than 7 times lower synchronization error,with a modest 9.3%energy overhead over 8 h.We argue this is an acceptable trade-off for mission-critical deployments requiring guaranteed security.These findings demonstrate iSTSP’s potential as a reliable solution for secure WSN synchronization and motivate future work on large-scale IoT deployments and integration with energy-efficient communication protocols.
文摘Deep neural networks(DNNs)are effective in solving both forward and inverse problems for nonlinear partial differential equations(PDEs).However,conventional DNNs are not effective in handling problems such as delay differential equations(DDEs)and delay integrodifferential equations(DIDEs)with constant delays,primarily due to their low regularity at delayinduced breaking points.In this paper,a DNN method that combines multi-task learning(MTL)which is proposed to solve both the forward and inverse problems of DIDEs.The core idea of this approach is to divide the original equation into multiple tasks based on the delay,using auxiliary outputs to represent the integral terms,followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function.Furthermore,given the increased training dificulty associated with multiple tasks and outputs,we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks.This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs,as demonstrated by comparisons with traditional DNN methods.We validate the effectiveness of this method through several numerical experiments,test various parameter sharing structures in MTL and compare the testing results of these structures.Finally,this method is implemented to solve the inverse problem of nonlinear DIDE and the results show that the unknown parameters of DIDE can be discovered with sparse or noisy data.
基金supported by the National Key R&D Program of China(Grant No.:2022YFC3501805)the National Natural Science Foundation of China(Grant No.:82374030)+2 种基金the Science and Technology Program of Tianjin in China(Grant No.:23ZYJDSS00030)the Tianjin Outstanding Youth Fund,China(Grant No.:23JCJQJC00030)the China Postdoctoral Science Foundation-Tianjin Joint Support Program(Grant No.:2023T030TJ).
文摘Metabolomics covers a wide range of applications in life sciences,biomedicine,and phytology.Data acquisition(to achieve high coverage and efficiency)and analysis(to pursue good classification)are two key segments involved in metabolomics workflows.Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups.However,insufficient feature extraction,inappropriate feature selection,overfitting,or underfitting lead to an insufficient capacity to discriminate plants that are often easily confused.Using two ginseng varieties,namely Panax japonicus(PJ)and Panax japonicus var.major(PJvm),containing the similar ginsenosides,we integrated pseudo-targeted metabolomics and deep neural network(DNN)modeling to achieve accurate species differentiation.A pseudo-targeted metabolomics approach was optimized through data acquisition mode,ion pairs generation,comparison between multiple reaction monitoring(MRM)and scheduled MRM(sMRM),and chromatographic elution gradient.In total,1980 ion pairs were monitored within 23 min,allowing for the most comprehensive ginseng metabolome analysis.The established DNN model demonstrated excellent classification performance(in terms of accuracy,precision,recall,F1 score,area under the curve,and receiver operating characteristic(ROC))using the entire metabolome data and feature-selection dataset,exhibiting superior advantages over random forest(RF),support vector machine(SVM),extreme gradient boosting(XGBoost),and multilayer perceptron(MLP).Moreover,DNNs were advantageous for automated feature learning,nonlinear modeling,adaptability,and generalization.This study confirmed practicality of the established strategy for efficient metabolomics data analysis and reliable classification performance even when using small-volume samples.This established approach holds promise for plant metabolomics and is not limited to ginseng.
基金Nourah bint Abdulrahman University for funding this project through the Researchers Supporting Project(PNURSP2025R319)Riyadh,Saudi Arabia and Prince Sultan University for covering the article processing charges(APC)associated with this publication.Special acknowledgement to Automated Systems&Soft Computing Lab(ASSCL),Prince Sultan University,Riyadh,Saudi Arabia.
文摘The growing incidence of cyberattacks necessitates a robust and effective Intrusion Detection Systems(IDS)for enhanced network security.While conventional IDSs can be unsuitable for detecting different and emerging attacks,there is a demand for better techniques to improve detection reliability.This study introduces a new method,the Deep Adaptive Multi-Layer Attention Network(DAMLAN),to boost the result of intrusion detection on network data.Due to its multi-scale attention mechanisms and graph features,DAMLAN aims to address both known and unknown intrusions.The real-world NSL-KDD dataset,a popular choice among IDS researchers,is used to assess the proposed model.There are 67,343 normal samples and 58,630 intrusion attacks in the training set,12,833 normal samples,and 9711 intrusion attacks in the test set.Thus,the proposed DAMLAN method is more effective than the standard models due to the consideration of patterns by the attention layers.The experimental performance of the proposed model demonstrates that it achieves 99.26%training accuracy and 90.68%testing accuracy,with precision reaching 98.54%on the training set and 96.64%on the testing set.The recall and F1 scores again support the model with training set values of 99.90%and 99.21%and testing set values of 86.65%and 91.37%.These results provide a strong basis for the claims made regarding the model’s potential to identify intrusion attacks and affirm its relatively strong overall performance,irrespective of type.Future work would employ more attempts to extend the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.
基金supported by the Intelligent System Research Group(ISysRG)supported by Universitas Sriwijaya funded by the Competitive Research 2024.
文摘Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically, a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared with Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represented an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results showed that this DRes-CNN-based imputation method outperforms current state-of-the-art models. These results established DRes-CNN as a reliable solution for addressing missing data.
基金supported by the National Key Scientific Instrument and Equipment Development Project(61827801).
文摘The fifth-generation (5G) communication requires a highly accurate estimation of the channel state information (CSI)to take advantage of the massive multiple-input multiple-output(MIMO) system. However, traditional channel estimation methods do not always yield reliable estimates. The methodology of this paper consists of deep residual shrinkage network (DRSN)neural network-based method that is used to solve this problem.Thus, the channel estimation approach, based on DRSN with its learning ability of noise-containing data, is first introduced. Then,the DRSN is used to train the noise reduction process based on the results of the least square (LS) channel estimation while applying the pilot frequency subcarriers, where the initially estimated subcarrier channel matrix is considered as a three-dimensional tensor of the DRSN input. Afterward, a mixed signal to noise ratio (SNR) training data strategy is proposed based on the learning ability of DRSN under different SNRs. Moreover, a joint mixed scenario training strategy is carried out to test the multi scenarios robustness of DRSN. As for the findings, the numerical results indicate that the DRSN method outperforms the spatial-frequency-temporal convolutional neural networks (SF-CNN)with similar computational complexity and achieves better advantages in the full SNR range than the minimum mean squared error (MMSE) estimator with a limited dataset. Moreover, the DRSN approach shows robustness in different propagation environments.
基金supported by the Chinese Academy of Science"Light of West China"Program(2022-XBQNXZ-015)the National Natural Science Foundation of China(11903071)the Operation,Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments,budgeted from the Ministry of Finance of China and administered by the Chinese Academy of Sciences。
文摘This paper addresses the performance degradation issue in a fast radio burst search pipeline based on deep learning.This issue is caused by the class imbalance of the radio frequency interference samples in the training dataset,and one solution is applied to improve the distribution of the training data by augmenting minority class samples using a deep convolutional generative adversarial network.Experi.mental results demonstrate that retraining the deep learning model with the newly generated dataset leads to a new fast radio burst classifier,which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference,thereby enhancing the performance of the search pipeline.
基金supported by the National Key Research and Development Program of China No.2023YFB2705000.
文摘With the rise of encrypted traffic,traditional network analysis methods have become less effective,leading to a shift towards deep learning-based approaches.Among these,multimodal learning-based classification methods have gained attention due to their ability to leverage diverse feature sets from encrypted traffic,improving classification accuracy.However,existing research predominantly relies on late fusion techniques,which hinder the full utilization of deep features within the data.To address this limitation,we propose a novel multimodal encrypted traffic classification model that synchronizes modality fusion with multiscale feature extraction.Specifically,our approach performs real-time fusion of modalities at each stage of feature extraction,enhancing feature representation at each level and preserving inter-level correlations for more effective learning.This continuous fusion strategy improves the model’s ability to detect subtle variations in encrypted traffic,while boosting its robustness and adaptability to evolving network conditions.Experimental results on two real-world encrypted traffic datasets demonstrate that our method achieves a classification accuracy of 98.23% and 97.63%,outperforming existing multimodal learning-based methods.
基金University of Jeddah,Jeddah,Saudi Arabia,grant No.(UJ-23-SRP-10).
文摘Electric vehicles(EVs)are gradually being deployed in the transportation sector.Although they have a high impact on reducing greenhouse gas emissions,their penetration is challenged by their random energy demand and difficult scheduling of their optimal charging.To cope with these problems,this paper presents a novel approach for photovoltaic grid-connected microgrid EV charging station energy demand forecasting.The present study is part of a comprehensive framework involving emerging technologies such as drones and artificial intelligence designed to support the EVs’charging scheduling task.By using predictive algorithms for solar generation and load demand estimation,this approach aimed at ensuring dynamic and efficient energy flow between the solar energy source,the grid and the electric vehicles.The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records.Therefore,various forecasters based on Long Short-term Memory,Gated Recurrent Unit,and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University(Italy).The developed forecasters have been evaluated and compared according to different metrics,including R,RMSE,MAE,and MAPE.We found that the obtained R values for both PV power generation and energy demand ranged between 97%and 98%.These study findings can be used for reliable and efficient decision-making on the management side of the optimal scheduling of the charging operations.
文摘With the rapid advancement of mobile communication networks,key technologies such as Multi-access Edge Computing(MEC)and Network Function Virtualization(NFV)have enhanced the quality of service for 5G users but have also significantly increased the complexity of network threats.Traditional static defense mechanisms are inadequate for addressing the dynamic and heterogeneous nature of modern attack vectors.To overcome these challenges,this paper presents a novel algorithmic framework,SD-5G,designed for high-precision intrusion detection in 5G environments.SD-5G adopts a three-stage architecture comprising traffic feature extraction,elastic representation,and adaptive classification.Specifically,an enhanced Concrete Autoencoder(CAE)is employed to reconstruct and compress high-dimensional network traffic features,producing compact and expressive representations suitable for large-scale 5G deployments.To further improve accuracy in ambiguous traffic classification,a Residual Convolutional Long Short-Term Memory model with an attention mechanism(ResCLA)is introduced,enabling multi-level modeling of spatial–temporal dependencies and effective detection of subtle anomalies.Extensive experiments on benchmark datasets—including 5G-NIDD,CIC-IDS2017,ToN-IoT,and BoT-IoT—demonstrate that SD-5G consistently achieves F1 scores exceeding 99.19%across diverse network environments,indicating strong generalization and real-time deployment capabilities.Overall,SD-5G achieves a balance between detection accuracy and deployment efficiency,offering a scalable,flexible,and effective solution for intrusion detection in 5G and next-generation networks.
基金supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2021R1F1A1055408)supported by the Researchers Supporting Project Number(MHIRSP2024005)Almaarefa University,Riyadh,Saudi Arabia.
文摘Fundoscopic diagnosis involves assessing the proper functioning of the eye’s nerves,blood vessels,retinal health,and the impact of diabetes on the optic nerves.Fundus disorders are a major global health concern,affecting millions of people worldwide due to their widespread occurrence.Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy.As a result,accurate fundus detection is essential for early diagnosis and effective treatment,helping to prevent severe complications and improve patient outcomes.To address this need,this article introduces a Derivative Model for Fundus Detection using Deep NeuralNetworks(DMFD-DNN)to enhance diagnostic precision.Thismethod selects key features for fundus detection using the least derivative,which identifies features correlating with stored fundus images.Feature filtering relies on the minimum derivative,determined by extracting both similar and varying textures.In this research,the DNN model was integrated with the derivative model.Fundus images were segmented,features were extracted,and the DNN was iteratively trained to identify fundus regions reliably.The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally,taking into account the least possible derivative across iterations,and using outputs from previous cycles.The hidden layer of the neural network operates on the most significant derivative,which may reduce precision across iterations.These derivatives are treated as inaccurate,and the model is subsequently trained using selective features and their corresponding extractions.The proposed model outperforms previous techniques in detecting fundus regions,achieving 94.98%accuracy and 91.57%sensitivity,with a minimal error rate of 5.43%.It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead,thereby improving operational efficiency and scalability.Ultimately,the proposed model enhances diagnostic precision and reduces errors,leading to more effective fundus dysfunction diagnosis and treatment.
基金supported in part by the National Natural Science Foundation of China under Grants No.62372087 and No.62072076the Research Fund of State Key Laboratory of Processors under Grant No.CLQ202310the CSC scholarship.
文摘Deep neural networks(DNNs)have found extensive applications in safety-critical artificial intelligence systems,such as autonomous driving and facial recognition systems.However,recent research has revealed their susceptibility to backdoors maliciously injected by adversaries.This vulnerability arises due to the intricate architecture and opacity of DNNs,resulting in numerous redundant neurons embedded within the models.Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs,thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications.This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them.Initially,we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs,highlighting the feasibility and practicality of generating backdoor attacks against DNNs.Subsequently,we provide an overview of notable works encompassing various attack and defense strategies,facilitating a comparative analysis of their approaches.Through these discussions,we offer constructive insights aimed at refining these techniques.Finally,we extend our research perspective to the domain of large language models(LLMs)and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs.Through a systematic review of existing studies on backdoor vulnerabilities in LLMs,we identify critical open challenges in this field and propose actionable directions for future research.
Abstract: With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models, a Feedforward Neural Network (FNN) and a Deep Clustering Network (DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal database (TEJ). These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, in line with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate for capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
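As a rough illustration of the single-layer classification step, the sketch below passes virtualized blockchain variables through one weighted layer and flags firms whose score exceeds a threshold. The Proof of Relation (PoR) activation is not specified in the abstract, so a sigmoid is used as a stand-in; all data, weights, and names are placeholders, not the authors' model.

```python
import numpy as np

def single_layer_classifier(X, w, b, activation=None):
    """Minimal single-layer network: a weighted sum of blockchain-derived
    variables passed through an activation. The paper's PoR activation is
    not specified here, so a sigmoid acts as a placeholder."""
    if activation is None:
        activation = lambda z: 1.0 / (1.0 + np.exp(-z))  # placeholder for PoR
    scores = activation(X @ w + b)
    return (scores > 0.5).astype(int)   # 1 = flagged as anomalous

# Placeholder data: 10 firms, 4 virtualized blockchain variables each.
X = np.random.rand(10, 4)
w = np.random.randn(4)
flags = single_layer_classifier(X, w, b=0.0)
```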
Abstract: Underwater wireless sensor networks (UWSNs) have emerged as a new paradigm of real-time organized systems, used in a diverse array of scenarios to manage the surrounding underwater environment. One of the major challenges these systems confront is topology control via clustering, which reduces the overload of wireless communications within a network and ensures low energy consumption and good scalability. This study presents a clustering technique in which the clustering process and cluster head (CH) selection are performed based on a Markov decision process and deep reinforcement learning (DRL). The DRL algorithm selects the CH by maximizing a defined reward function. Subsequently, the sensed data are collected by the CHs and sent to autonomous underwater vehicles. In the final phase, the energy consumed by each sensor is calculated and its residual energy is updated. The autonomous underwater vehicle performs all clustering and CH selection operations. This procedure persists until the sensors' power has been reduced to the point that no node can become a CH. Analysis of the findings and comparison with alternative frameworks show that this method can control the cluster size and the number of CHs, which ultimately improves the energy usage of nodes and prolongs the lifespan of the network. Our simulation results illustrate that the suggested methodology surpasses the conventional low-energy adaptive clustering hierarchy, the distance- and energy-constrained K-means clustering scheme, and the vector-based forwarding protocol, and is viable for deployment in an actual operational environment.
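A heavily simplified, bandit-style stand-in for the reward-driven CH selection described above is sketched below: the agent picks a candidate CH epsilon-greedily and updates its value estimate with a reward that is assumed here to trade residual energy against distance to the AUV. The paper defines its own reward function and uses a full DRL agent; everything below is illustrative.

```python
import numpy as np

def select_cluster_head(q_values, epsilon=0.1):
    """Epsilon-greedy choice over candidate cluster heads."""
    if np.random.rand() < epsilon:                 # explore
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))                # exploit

def update_q(q_values, node, reward, alpha=0.5):
    """Incremental value update toward the observed reward."""
    q_values[node] += alpha * (reward - q_values[node])
    return q_values

residual_energy = np.array([0.9, 0.4, 0.7, 0.2])   # per-node residual energy
dist_to_auv = np.array([0.3, 0.8, 0.5, 0.9])       # normalized distances to the AUV
q = np.zeros(4)

for _ in range(100):                               # repeated clustering rounds
    ch = select_cluster_head(q)
    reward = residual_energy[ch] - dist_to_auv[ch] # assumed reward shape
    q = update_q(q, ch, reward)
    residual_energy[ch] -= 0.005                   # CH duty drains energy
```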
Funding: Supported via funding from Prince Sattam bin Abdulaziz University (PSAU/2025/R/1446), Princess Nourah bint Abdulrahman University (PNURSP2025R300), and Prince Sultan University.
Abstract: Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly, and providing such a dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data; common operations include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images, so they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates new classes of texture. It is possible to rapidly generate new texture classes using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures is increased through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and near-duplicate textures. The proposed method is around 4 to 10 times faster than some well-known generative networks, and the quality of the generated textures surpasses that of these networks. It can also generate textures that surpass those of some GANs and parametric models in certain image quality metrics, and it can provide a big texture dataset for training deep networks. A new big texture dataset, called BigTex, is created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels, organized into 600 classes; it is uploaded to Kaggle and Google Drive. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
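One plausible reading of the gradient-based generation step is activation maximization against a single kernel of a pre-trained network; the sketch below performs gradient ascent on a random 150×150 image so that one VGG16 kernel responds strongly. The chosen network, layer and kernel indices, step size, and iteration count are assumptions, not the authors' settings.

```python
import torch
import torchvision.models as models

# Illustrative sketch (not the authors' code): synthesize a texture by gradient
# ascent on a random image so that one kernel of a pre-trained VGG16 layer
# responds strongly. Layer/kernel indices and optimizer settings are assumed.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
layer_idx, kernel_idx = 10, 3

img = torch.rand(1, 3, 150, 150, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    x = img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == layer_idx:
            break
    loss = -x[0, kernel_idx].mean()    # maximize the chosen kernel's response
    loss.backward()
    opt.step()

texture = img.detach().clamp(0, 1)     # candidate 150x150 texture
```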
Funding: Project ZR2023MF111, supported by the Shandong Provincial Natural Science Foundation.
Abstract: With the advancement of Vehicle-to-Everything (V2X) technology, efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance. Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions. To address these challenges, this study presents an innovative framework that combines Graph Neural Networks (GNNs) with a Double Deep Q-Network (DDQN), utilizing dynamic graph structures and reinforcement learning. An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology, thereby improving decision accuracy and efficiency. Meanwhile, the framework models communication links as nodes and interference relationships as edges, effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information. Employing an aggregation mechanism based on the Graph Attention Network (GAT), it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance, ensuring more efficient and adaptive resource management. This design ensures reliable Vehicle-to-Vehicle (V2V) communication while maintaining high Vehicle-to-Infrastructure (V2I) throughput. The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment, allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions. Experimental results demonstrate that the proposed method significantly reduces computational overhead, mitigates latency, and improves resource utilization efficiency in vehicular networks under complex traffic scenarios. This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems, offering substantial theoretical significance and practical value.
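The attention-weighted aggregation over interference neighbors can be sketched with a generic single-head GAT layer, as below; the feature sizes, neighbor lists, and single-head design are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """Minimal single-head graph-attention layer: each link node aggregates
    its sampled interference neighbors with learned attention weights.
    Generic GAT sketch, not the paper's architecture."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, neighbor_idx):
        # h: [N, in_dim] node features; neighbor_idx: list of neighbor index tensors.
        z = self.W(h)
        out = torch.empty_like(z)
        for i, nbrs in enumerate(neighbor_idx):
            zi = z[i].expand(len(nbrs), -1)
            e = F.leaky_relu(self.a(torch.cat([zi, z[nbrs]], dim=1))).squeeze(1)
            alpha = torch.softmax(e, dim=0)               # attention over neighbors
            out[i] = (alpha.unsqueeze(1) * z[nbrs]).sum(0)
        return out

# Example: 5 V2V links, 8-dim features, interference-based neighbor lists.
h = torch.rand(5, 8)
nbrs = [torch.tensor([1, 2]), torch.tensor([0]), torch.tensor([0, 3, 4]),
        torch.tensor([2]), torch.tensor([2])]
emb = SimpleGATLayer(8, 16)(h, nbrs)
```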
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62371181, in part by the Changzhou Science and Technology International Cooperation Program under Grant CZ20230029, by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A2B5B02087169), and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (RS-202300259004), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: Significant breakthroughs in the Internet of Things (IoT) and 5G technologies have driven many smart healthcare activities, leading to a flood of computationally intensive applications in smart healthcare networks. Mobile Edge Computing (MEC) is considered an efficient solution for providing powerful computing capabilities to latency- or energy-sensitive nodes. The low-latency and high-reliability requirements of healthcare application services can be met through optimal offloading and resource allocation for the computational tasks of the nodes. In this study, we established a system model consisting of two types of nodes, considering non-divisible computational tasks and the trade-off between latency and energy consumption. To minimize the processing cost of the system tasks, a Mixed-Integer Nonlinear Programming (MINLP) task offloading problem is formulated. This problem is decomposed into a task offloading decision problem and a resource allocation problem. The resource allocation problem is solved using traditional optimization algorithms, while the offloading decision problem is solved using a deep reinforcement learning algorithm. We propose an Online Offloading based on Deep Reinforcement Learning (OO-DRL) algorithm with parallel deep neural networks and a weight-sensitive experience replay mechanism. Simulation results show that, compared with several existing methods, the proposed algorithm can perform real-time task offloading in a smart healthcare network under dynamically varying environments and reduce the system task processing cost.
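A minimal sketch of the offloading-decision loop is given below, under the assumption that several parallel decision networks (approximated here by fixed random linear scorers for brevity) each propose a binary offloading vector, the cheapest candidate is kept, and the transition is stored in a replay buffer weighted by cost. The cost model and the weighting rule are assumptions; the OO-DRL training and weight-sensitive sampling details are omitted.

```python
import numpy as np

# Hypothetical sketch: parallel decision networks propose binary offloading
# vectors, the lowest-cost candidate is executed, and the transition is stored
# in a cost-weighted replay buffer. All parameters and the cost model are assumed.
rng = np.random.default_rng(0)
N_NODES, N_DNNS = 6, 4
dnn_weights = [rng.standard_normal((N_NODES, N_NODES)) for _ in range(N_DNNS)]
replay = []                                   # (state, decision, weight) tuples

def processing_cost(state, decision):
    # Assumed cost: offloaded tasks pay a reduced transmission term,
    # locally processed tasks pay the full latency/energy term.
    return float(np.sum(decision * 0.4 * state + (1 - decision) * state))

for step in range(50):
    state = rng.random(N_NODES)               # per-node task sizes (toy input)
    candidates = [(1 / (1 + np.exp(-(W @ state))) > 0.5).astype(int)
                  for W in dnn_weights]       # one binary decision per network
    costs = [processing_cost(state, d) for d in candidates]
    best = int(np.argmin(costs))
    # Weight-sensitive replay: cheaper decisions receive larger sample weights.
    replay.append((state, candidates[best], 1.0 / (1e-6 + costs[best])))
```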