Journal Articles
21,018 articles found
1. Application of Improved Deep Auto-Encoder Network in Rolling Bearing Fault Diagnosis (Cited by: 1)
Authors: Jian Di, Leilei Wang. Journal of Computer and Communications, 2018, No. 7, pp. 41-53 (13 pages)
Since the effectiveness of fault feature extraction is low under traditional bearing fault diagnosis methods, a bearing fault diagnosis method based on a Deep Auto-encoder Network (DAEN) optimized by Cloud Adaptive Particle Swarm Optimization (CAPSO) is proposed. On the basis of analyzing CAPSO and DAEN, the CAPSO-DAEN fault diagnosis model is built. The model uses the randomness and stability of the CAPSO algorithm to optimize the connection weights of the DAEN, reducing constraints on the weights and extracting fault features adaptively. Finally, efficient and accurate fault diagnosis is implemented with a Softmax classifier. Test results show that, under appropriate parameters, the proposed method achieves higher diagnostic accuracy and more stable diagnosis results than methods based on DAEN, Support Vector Machine (SVM), and the Back Propagation (BP) algorithm.
Keywords: fault diagnosis; rolling bearing; deep auto-encoder network; CAPSO algorithm; feature extraction
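The abstract gives no implementation detail for CAPSO, so the following is only a minimal sketch of the general idea under stated assumptions: a plain PSO (without the cloud-model adaptation that distinguishes CAPSO) searches the flattened weights of a tiny one-layer auto-encoder to minimize reconstruction error. The data, dimensions, and hyperparameters are stand-ins, and the Softmax classifier stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))            # stand-in vibration features (hypothetical)

D_IN, D_HID = 16, 4                       # tiny single-layer auto-encoder
N_W = D_IN * D_HID * 2                    # encoder + decoder weights, flattened

def recon_error(w, X):
    """Reconstruction MSE of the auto-encoder encoded by flat vector w."""
    We = w[:D_IN * D_HID].reshape(D_IN, D_HID)
    Wd = w[D_IN * D_HID:].reshape(D_HID, D_IN)
    H = np.tanh(X @ We)                   # encoder
    return float(((X - H @ Wd) ** 2).mean())

# Plain PSO over the flattened weights; CAPSO would additionally draw the
# inertia weight from a cloud model for adaptive randomness/stability.
n_particles, iters = 30, 100
pos = rng.normal(scale=0.1, size=(n_particles, N_W))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([recon_error(p, X) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([recon_error(p, X) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best reconstruction MSE:", pbest_f.min())
```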
2. A Deep Auto-encoder Based Security Mechanism for Protecting Sensitive Data Using AI Based Risk Assessment
Authors: Lavanya M, Mangayarkarasi S. Journal of Harbin Institute of Technology (New Series), 2025, No. 4, pp. 90-98 (9 pages)
Big data has ushered in an era of unprecedented access to vast amounts of new, unstructured data, particularly in the realm of sensitive information. It presents unique opportunities for enhancing risk alerting systems, but also poses challenges in terms of extraction and analysis due to its diverse file formats. This paper proposes the utilization of a DAE-based (Deep Auto-encoder) model for projecting risk associated with financial data. The research delves into the development of an indicator assessing the degree to which organizations successfully avoid displaying bias in handling financial information. Simulation results demonstrate the superior performance of the DAE algorithm, showcasing fewer false positives, improved overall detection rates, and a noteworthy 9% reduction in failure jitter. The optimized DAE algorithm achieves an accuracy of 99%, surpassing existing methods, thereby presenting a robust solution for sensitive data risk projection.
Keywords: data mining; sensitive data; deep auto-encoders
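As a companion illustration of the entry above, here is a minimal, hypothetical sketch of auto-encoder-based risk projection: train on records assumed normal, then flag records whose reconstruction error exceeds a 3-sigma threshold. The data and threshold rule are assumptions, not the paper's financial-bias indicator.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(500, 20)                      # stand-in "normal" records
risky  = torch.randn(50, 20) * 3 + 2               # shifted records to flag

model = nn.Sequential(                             # deep auto-encoder
    nn.Linear(20, 8), nn.ReLU(),
    nn.Linear(8, 3),  nn.ReLU(),                   # bottleneck
    nn.Linear(3, 8),  nn.ReLU(),
    nn.Linear(8, 20),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(300):                               # train on normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():                              # per-record reconstruction error
    err_n = ((model(normal) - normal) ** 2).mean(dim=1)
    err_r = ((model(risky)  - risky)  ** 2).mean(dim=1)

thresh = err_n.mean() + 3 * err_n.std()            # simple 3-sigma risk threshold
print("flagged risky records:", int((err_r > thresh).sum()), "/", len(risky))
```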
3. Deep Auto-Encoder Based Intelligent and Secure Time Synchronization Protocol (iSTSP) for Security-Critical Time-Sensitive WSNs
Authors: Ramadan Abdul-Rashid, Mohd Amiruddin Abd Rahman, Abdulaziz Yagoub Barnawi. Computer Modeling in Engineering & Sciences, 2025, No. 9, pp. 3213-3250 (38 pages)
Accurate time synchronization is fundamental to the correct and efficient operation of Wireless Sensor Networks (WSNs), especially in security-critical, time-sensitive applications. However, most existing protocols degrade substantially under malicious interference. We introduce iSTSP, an Intelligent and Secure Time Synchronization Protocol that implements a four-stage defense pipeline to ensure robust, precise synchronization even in hostile environments: (1) trust preprocessing that filters node participation using behavioral trust scoring; (2) anomaly isolation employing a lightweight autoencoder to detect and excise malicious nodes in real time; (3) reliability-weighted consensus that prioritizes high-trust nodes during time aggregation; and (4) convergence-optimized synchronization that dynamically adjusts parameters using theoretical stability bounds. We provide rigorous convergence analysis, including a closed-form expression for convergence time, and validate the protocol through both simulations and real-world experiments on a controlled 16-node testbed. Under Sybil attacks with five malicious nodes within this testbed, iSTSP keeps the increase in synchronization error under 12% and achieves rapid convergence. Compared to state-of-the-art protocols like TPSN, SE-FTSP, and MMAR-CTS, iSTSP offers 60% faster detection, broader threat coverage, and more than 7 times lower synchronization error, with a modest 9.3% energy overhead over 8 h. We argue this is an acceptable trade-off for mission-critical deployments requiring guaranteed security. These findings demonstrate iSTSP's potential as a reliable solution for secure WSN synchronization and motivate future work on large-scale IoT deployments and integration with energy-efficient communication protocols.
Keywords: time-sensitive wireless sensor networks (TS-WSNs); secure time synchronization protocol; trust-based authentication; autoencoder model; deep learning; malicious node detection; Internet of Things; energy-efficient communication protocols
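Stage (3) of the pipeline above is easy to sketch numerically. The toy below assumes 16 reported clock offsets with 5 Sybil reports, replaces the paper's autoencoder-based stage (2) with a simple median-deviation filter, and then forms the reliability-weighted consensus; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
true_offset = 5.0                                   # ms, unknown to the nodes

# 16 reported clock offsets: honest nodes near the truth, Sybil nodes far off
reports = np.concatenate([true_offset + rng.normal(0, 0.2, 11),
                          rng.uniform(-50, 50, 5)]) # 5 malicious reports
trust = np.ones(len(reports))                       # behavioral trust scores

# Stage 2 (anomaly isolation) stand-in: zero the trust of outliers by
# deviation from the median; iSTSP uses a lightweight autoencoder here.
dev = np.abs(reports - np.median(reports))
trust[dev > 5 * np.median(dev)] = 0.0

# Stage 3: reliability-weighted consensus over the surviving nodes
consensus = np.sum(trust * reports) / np.sum(trust)
print(f"consensus offset {consensus:.3f} ms vs true {true_offset} ms")
```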
4. A Balanced Traffic Signal Scheduling Algorithm Based on Improved Deep Q Networks
Author: He Daokun. Machinery Design & Manufacture (PKU Core Journal), 2025, No. 4, pp. 135-140 (6 pages)
To further alleviate traffic congestion at urban intersections during peak hours and achieve balanced traffic flow across all approaches, a balanced traffic signal scheduling algorithm based on improved Deep Q Networks is proposed. The features most relevant to traffic signal scheduling at intersections are extracted, traffic signal models are built for a one-way intersection and for linear two-way intersections, and a traffic signal scheduling optimization model is constructed on this basis. To address the convergence and overestimation deficiencies of the Deep Q Networks algorithm when applied to traffic signal scheduling, dueling-network, double-network, and gradient-update-strategy improvements are applied to Deep Q Networks, and a corresponding balanced scheduling algorithm is proposed. Simulation comparisons with classic Deep Q Networks verify the applicability and superiority of the proposed algorithm for the traffic signal scheduling problem. Based on urban road data, simulations are carried out for the two scenarios; the results show that the algorithm effectively shortens vehicle queue lengths at intersections, balances traffic throughput across approaches, and relieves congestion in peak travel directions, improving the efficiency of intersection traffic signal scheduling.
Keywords: traffic signal scheduling; intersection; deep Q networks; deep reinforcement learning; intelligent transportation
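The three improvements named above (dueling network, double network, and a steadier gradient-update rule) are standard DQN refinements, so a generic sketch can be given; it is not the paper's traffic-signal model. State size, action count, and data below are placeholders for queue features and signal phases.

```python
import torch
import torch.nn as nn

N_STATE, N_ACTION = 8, 4      # e.g., queue lengths per approach / signal phases

class DuelingQNet(nn.Module):
    """Dueling network: Q = V + A - mean(A)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(N_STATE, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)
        self.adv = nn.Linear(64, N_ACTION)

    def forward(self, s):
        h = self.body(s)
        a = self.adv(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

online, target = DuelingQNet(), DuelingQNet()
target.load_state_dict(online.state_dict())

s, s2 = torch.randn(32, N_STATE), torch.randn(32, N_STATE)   # fake transitions
r, done = torch.randn(32), torch.zeros(32)
act = torch.randint(0, N_ACTION, (32,))

# Double-DQN target: the online net picks the action, the target net
# evaluates it, countering the overestimation the abstract refers to.
with torch.no_grad():
    a_star = online(s2).argmax(dim=1)
    q_next = target(s2).gather(1, a_star.unsqueeze(1)).squeeze(1)
    y = r + 0.99 * (1 - done) * q_next

q = online(s).gather(1, act.unsqueeze(1)).squeeze(1)
loss = nn.functional.smooth_l1_loss(q, y)   # Huber loss steadies gradient updates
loss.backward()
print("TD loss:", float(loss))
```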
5. DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (Cited by: 1)
Authors: WANG Chen-yao, SHI Feng. Journal of Mathematics, 2025, No. 1, pp. 13-38 (26 pages)
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter sharing structures in MTL, and compare the testing results of these structures. Finally, this method is implemented to solve the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: delay integro-differential equation; multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
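The auxiliary-output idea above can be illustrated on a toy problem. The sketch below is a loose, hypothetical adaptation, not the authors' method: a two-head network represents u(t) and an auxiliary v(t) standing for the integral term of the toy DIDE u'(t) = -u(t) + v(t) with v(t) the integral of u over [t - tau, t] (so v'(t) = u(t) - u(t - tau)), and both task residuals enter one loss. The history function on [-tau, 0], breaking-point handling, and the sequential training scheme are omitted.

```python
import torch
import torch.nn as nn

tau = 1.0
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 2))            # two heads: u(t) and auxiliary v(t)

def residuals(t):
    """Sum of squared residuals of the main and auxiliary tasks."""
    t = t.requires_grad_(True)
    out = net(t)
    u, v = out[:, :1], out[:, 1:]
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), t, create_graph=True)[0]
    u_delay = net(t - tau)[:, :1]                # crude: ignores history on [-tau, 0]
    r1 = du + u - v                              # main ODE task
    r2 = dv - (u - u_delay)                      # auxiliary integral task
    return (r1 ** 2).mean() + (r2 ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    t = torch.rand(128, 1) * 2 * tau             # collocation points on [0, 2*tau]
    ic = (net(torch.zeros(1, 1))[:, 0] - 1.0).pow(2).mean()   # u(0) = 1
    loss = residuals(t) + ic
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final MTL loss:", float(loss))
```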
6. Integration of deep neural network modeling and LC-MS-based pseudo-targeted metabolomics to discriminate easily confused ginseng species (Cited by: 1)
Authors: Meiting Jiang, Yuyang Sha, Yadan Zou, Xiaoyan Xu, Mengxiang Ding, Xu Lian, Hongda Wang, Qilong Wang, Kefeng Li, De-an Guo, Wenzhi Yang. Journal of Pharmaceutical Analysis, 2025, No. 1, pp. 126-137 (12 pages)
Metabolomics covers a wide range of applications in life sciences, biomedicine, and phytology. Data acquisition (to achieve high coverage and efficiency) and analysis (to pursue good classification) are two key segments involved in metabolomics workflows. Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups. However, insufficient feature extraction, inappropriate feature selection, overfitting, or underfitting leads to an insufficient capacity to discriminate plants that are often easily confused. Using two ginseng varieties, namely Panax japonicus (PJ) and Panax japonicus var. major (PJvm), which contain similar ginsenosides, we integrated pseudo-targeted metabolomics and deep neural network (DNN) modeling to achieve accurate species differentiation. A pseudo-targeted metabolomics approach was optimized through the data acquisition mode, ion pair generation, comparison between multiple reaction monitoring (MRM) and scheduled MRM (sMRM), and the chromatographic elution gradient. In total, 1,980 ion pairs were monitored within 23 min, allowing for the most comprehensive ginseng metabolome analysis. The established DNN model demonstrated excellent classification performance (in terms of accuracy, precision, recall, F1 score, area under the curve, and receiver operating characteristic (ROC)) using both the entire metabolome data and a feature-selection dataset, exhibiting superior advantages over random forest (RF), support vector machine (SVM), extreme gradient boosting (XGBoost), and multilayer perceptron (MLP). Moreover, DNNs were advantageous for automated feature learning, nonlinear modeling, adaptability, and generalization. This study confirmed the practicality of the established strategy for efficient metabolomics data analysis and reliable classification performance even when using small-volume samples. The established approach holds promise for plant metabolomics and is not limited to ginseng.
Keywords: liquid chromatography-mass spectrometry; pseudo-targeted metabolomics; deep neural network; species differentiation; ginseng
7. Intrusion Detection Model on Network Data with Deep Adaptive Multi-Layer Attention Network (DAMLAN)
Authors: Fatma S. Alrayes, Syed Umar Amin, Nada Ali Hakami, Mohammed K. Alzaylaee, Tariq Kashmeery. Computer Modeling in Engineering & Sciences, 2025, No. 7, pp. 581-614 (34 pages)
The growing incidence of cyberattacks necessitates robust and effective Intrusion Detection Systems (IDS) for enhanced network security. While conventional IDSs can be unsuitable for detecting different and emerging attacks, there is a demand for better techniques to improve detection reliability. This study introduces a new method, the Deep Adaptive Multi-Layer Attention Network (DAMLAN), to boost the results of intrusion detection on network data. Owing to its multi-scale attention mechanisms and graph features, DAMLAN aims to address both known and unknown intrusions. The real-world NSL-KDD dataset, a popular choice among IDS researchers, is used to assess the proposed model. There are 67,343 normal samples and 58,630 intrusion attacks in the training set, and 12,833 normal samples and 9,711 intrusion attacks in the test set. The proposed DAMLAN method is more effective than standard models because its attention layers account for the underlying patterns. The experimental performance of the proposed model demonstrates that it achieves 99.26% training accuracy and 90.68% testing accuracy, with precision reaching 98.54% on the training set and 96.64% on the testing set. The recall and F1 scores further support the model, with training set values of 99.90% and 99.21% and testing set values of 86.65% and 91.37%. These results provide a strong basis for the claims made regarding the model's potential to identify intrusion attacks and affirm its relatively strong overall performance, irrespective of attack type. Future work will attempt to extend the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.
Keywords: intrusion detection; deep adaptive networks; multi-layer attention; DAMLAN; network security; anomaly detection
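The abstract does not specify DAMLAN's layers, so the following is only a hypothetical illustration of stacked multi-layer attention over groups of flow features; the token layout and the padding of the 41 NSL-KDD features to 48 are assumptions.

```python
import torch
import torch.nn as nn

# 41 NSL-KDD features padded to 48 and split into 8 "tokens" of 6 features,
# so attention can weigh groups of flow statistics (a hypothetical layout;
# the paper's exact DAMLAN architecture is not given in the abstract).
class TinyAttentionIDS(nn.Module):
    def __init__(self, n_feat=48, n_tokens=8, d=32, n_cls=2):
        super().__init__()
        self.tok = n_tokens
        self.embed = nn.Linear(n_feat // n_tokens, d)
        self.attn1 = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.attn2 = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, n_cls)

    def forward(self, x):
        t = self.embed(x.view(x.size(0), self.tok, -1))
        t = t + self.attn1(t, t, t)[0]     # first attention layer, residual
        t = t + self.attn2(t, t, t)[0]     # second layer stacks the mechanism
        return self.head(t.mean(dim=1))    # pool tokens, classify normal/attack

model = TinyAttentionIDS()
logits = model(torch.randn(16, 48))        # 16 stand-in flow records
print(logits.shape)                        # torch.Size([16, 2])
```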
8. A Modified Deep Residual-Convolutional Neural Network for Accurate Imputation of Missing Data
Authors: Firdaus Firdaus, Siti Nurmaini, Anggun Islami, Annisa Darmawahyuni, Ade Iriani Sapitri, Muhammad Naufal Rachmatullah, Bambang Tutuko, Akhiar Wista Arum, Muhammad Irfan Karim, Yultrien Yultrien, Ramadhana Noor Salassa Wandya. Computers, Materials & Continua, 2025, No. 2, pp. 3419-3441 (23 pages)
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV) datasets, which contain critical-care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represents an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results show that this DRes-CNN-based imputation method outperforms current state-of-the-art models, establishing DRes-CNN as a reliable solution for addressing missing data.
Keywords: data imputation; missing data; deep learning; deep residual convolutional neural network
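As an illustration of residual-convolutional imputation in the spirit of the entry above (not the authors' DRes-CNN), the sketch below trains a small 1-D residual CNN to reconstruct masked values of synthetic signals and reports RMSE on the missing entries only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.sin(torch.linspace(0, 12.6, 1000)).repeat(64, 1).unsqueeze(1)
x += 0.05 * torch.randn_like(x)                # 64 signals, 1 channel, length 1000
mask = (torch.rand_like(x) > 0.2).float()      # ~20% of values "missing"
x_in = x * mask                                # zeros stand in for missing data

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, 5, padding=2), nn.ReLU(),
            nn.Conv1d(ch, ch, 5, padding=2))
    def forward(self, h):
        return torch.relu(h + self.conv(h))    # residual connection

net = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
                    ResBlock(16), ResBlock(16),
                    nn.Conv1d(16, 1, 5, padding=2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):
    opt.zero_grad()
    out = net(x_in)
    # MSE restricted to the missing entries that we are imputing
    loss = ((out - x) ** 2 * (1 - mask)).sum() / (1 - mask).sum()
    loss.backward()
    opt.step()

print("imputation RMSE on missing entries:", float(loss.sqrt()))
```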
9. Deep residual systolic network for massive MIMO channel estimation by joint training strategies of mixed-SNR and mixed-scenarios
Authors: SUN Meng, JING Qingfeng, ZHONG Weizhi. Journal of Systems Engineering and Electronics, 2025, No. 4, pp. 903-913 (11 pages)
The fifth-generation (5G) communication requires a highly accurate estimation of the channel state information (CSI) to take advantage of the massive multiple-input multiple-output (MIMO) system. However, traditional channel estimation methods do not always yield reliable estimates. The methodology of this paper consists of a Deep Residual Shrinkage Network (DRSN)-based method that is used to solve this problem. Thus, the channel estimation approach, based on the DRSN and its ability to learn from noise-containing data, is first introduced. Then, the DRSN is used to train the noise reduction process based on the results of the least squares (LS) channel estimation while applying the pilot frequency subcarriers, where the initially estimated subcarrier channel matrix is considered as a three-dimensional tensor of the DRSN input. Afterward, a mixed signal-to-noise ratio (SNR) training data strategy is proposed based on the learning ability of the DRSN under different SNRs. Moreover, a joint mixed-scenario training strategy is carried out to test the multi-scenario robustness of the DRSN. As for the findings, the numerical results indicate that the DRSN method outperforms spatial-frequency-temporal convolutional neural networks (SF-CNN) with similar computational complexity and achieves better advantages over the full SNR range than the minimum mean squared error (MMSE) estimator with a limited dataset. Moreover, the DRSN approach shows robustness in different propagation environments.
Keywords: massive multiple-input multiple-output (MIMO); channel estimation; deep residual shrinkage network (DRSN); deep convolutional neural network (CNN)
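The defining ingredient of a deep residual shrinkage network is a residual block with a learned channel-wise soft threshold. Below is a minimal sketch of such a block applied to an LS channel estimate laid out as a real/imaginary two-channel grid; the dimensions are illustrative and the paper's full training pipeline is omitted.

```python
import torch
import torch.nn as nn

class ShrinkageBlock(nn.Module):
    """Residual block with a learned channel-wise soft threshold, the core
    idea of a deep residual shrinkage network for noisy inputs."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.gate = nn.Sequential(                    # estimates tau per channel
            nn.Linear(ch, ch), nn.ReLU(),
            nn.Linear(ch, ch), nn.Sigmoid())

    def forward(self, x):
        h = self.conv(x)
        absmean = h.abs().mean(dim=(2, 3))            # (B, C) feature magnitude
        tau = (absmean * self.gate(absmean))[:, :, None, None]
        h = torch.sign(h) * torch.clamp(h.abs() - tau, min=0)  # soft threshold
        return torch.relu(x + h)                      # residual connection

# Treat an LS channel estimate as a 2-channel (real/imag) "image" over an
# antenna-by-subcarrier grid (dimensions here are purely illustrative).
h_ls = torch.randn(8, 2, 32, 64)
block = ShrinkageBlock(2)
print(block(h_ls).shape)                              # torch.Size([8, 2, 32, 64])
```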
10. Handling class imbalance of radio frequency interference in deep learning-based fast radio burst search pipelines using a deep convolutional generative adversarial network
Authors: Wenlong Du, Yanling Liu, Maozheng Chen. Astronomical Techniques and Instruments, 2025, No. 1, pp. 10-15 (6 pages)
This paper addresses the performance degradation issue in a fast radio burst search pipeline based on deep learning. This issue is caused by the class imbalance of the radio frequency interference samples in the training dataset, and one solution is applied to improve the distribution of the training data by augmenting minority class samples using a deep convolutional generative adversarial network. Experimental results demonstrate that retraining the deep learning model with the newly generated dataset leads to a new fast radio burst classifier, which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference, thereby enhancing the performance of the search pipeline.
Keywords: fast radio burst; deep convolutional generative adversarial network; class imbalance; radio frequency interference; deep learning
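A DCGAN for minority-class augmentation pairs a transposed-convolution generator with a convolutional discriminator. The sketch below builds both for 64x64 single-channel patches standing in for RFI time-frequency data; the adversarial training loop and the retraining of the FRB classifier on the augmented set are omitted. After training, generated samples G(z) for the minority class would simply be appended to the training set.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: latent vector -> 64x64 single-channel image,
    here standing in for a minority-class RFI time-frequency patch."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 16))                   # -> (B, 1, 1, 1) real/fake score
    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
fake = G(torch.randn(4, 100))
print(fake.shape, D(fake).shape)                     # (4, 1, 64, 64) and (4,)
```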
11. DMF: A Deep Multimodal Fusion-Based Network Traffic Classification Model
Authors: Xiangbin Wang, Qingjun Yuan, Weina Niu, Qianwei Meng, Yongjuan Wang, Chunxiang Gu. Computers, Materials & Continua, 2025, No. 5, pp. 2267-2285 (19 pages)
With the rise of encrypted traffic, traditional network analysis methods have become less effective, leading to a shift towards deep learning-based approaches. Among these, multimodal learning-based classification methods have gained attention due to their ability to leverage diverse feature sets from encrypted traffic, improving classification accuracy. However, existing research predominantly relies on late fusion techniques, which hinder the full utilization of deep features within the data. To address this limitation, we propose a novel multimodal encrypted traffic classification model that synchronizes modality fusion with multiscale feature extraction. Specifically, our approach performs real-time fusion of modalities at each stage of feature extraction, enhancing feature representation at each level and preserving inter-level correlations for more effective learning. This continuous fusion strategy improves the model's ability to detect subtle variations in encrypted traffic, while boosting its robustness and adaptability to evolving network conditions. Experimental results on two real-world encrypted traffic datasets demonstrate that our method achieves classification accuracies of 98.23% and 97.63%, outperforming existing multimodal learning-based methods.
Keywords: deep fusion; intrusion detection; multimodal learning; network traffic classification
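The stage-wise ("synchronized") fusion idea above can be contrasted with late fusion in a few lines. The toy model below is a hypothetical reading, not the DMF architecture: two modality branches exchange a fused representation after every extraction stage instead of fusing only once at the end.

```python
import torch
import torch.nn as nn

class StageFusionNet(nn.Module):
    """Two modality branches (e.g., packet-byte and flow-statistic views of a
    traffic record) fused at every extraction stage rather than only at the end."""
    def __init__(self, n_cls=10):
        super().__init__()
        self.a1 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.b1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.fuse1 = nn.Linear(128, 64)
        self.a2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.b2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.fuse2 = nn.Linear(128, 64)
        self.head = nn.Linear(64, n_cls)

    def forward(self, xa, xb):
        ha, hb = self.a1(xa), self.b1(xb)
        f1 = torch.relu(self.fuse1(torch.cat([ha, hb], dim=1)))     # stage-1 fusion
        ha2, hb2 = self.a2(ha + f1), self.b2(hb + f1)               # fused context fed back
        f2 = torch.relu(self.fuse2(torch.cat([ha2, hb2], dim=1)))   # stage-2 fusion
        return self.head(f2)

net = StageFusionNet()
print(net(torch.randn(8, 64), torch.randn(8, 32)).shape)           # torch.Size([8, 10])
```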
12. Demand Forecasting of a Microgrid-Powered Electric Vehicle Charging Station Enabled by Emerging Technologies and Deep Recurrent Neural Networks
Authors: Sahbi Boubaker, Adel Mellit, Nejib Ghazouani, Walid Meskine, Mohamed Benghanem, Habib Kraiem. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 2237-2259 (23 pages)
Electric vehicles (EVs) are gradually being deployed in the transportation sector. Although they have a high impact on reducing greenhouse gas emissions, their penetration is challenged by their random energy demand and the difficult scheduling of their optimal charging. To cope with these problems, this paper presents a novel approach for forecasting the energy demand of a photovoltaic grid-connected microgrid EV charging station. The present study is part of a comprehensive framework involving emerging technologies such as drones and artificial intelligence designed to support the EV charging scheduling task. By using predictive algorithms for solar generation and load demand estimation, this approach aims at ensuring dynamic and efficient energy flow between the solar energy source, the grid, and the electric vehicles. The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records. Therefore, various forecasters based on Long Short-Term Memory, Gated Recurrent Unit, and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University (Italy). The developed forecasters have been evaluated and compared according to different metrics, including R, RMSE, MAE, and MAPE. We found that the obtained R values for both PV power generation and energy demand ranged between 97% and 98%. These study findings can be used for reliable and efficient decision-making on the management side of the optimal scheduling of the charging operations.
Keywords: microgrid; electric vehicles; charging station; forecasting; deep recurrent neural networks; energy management system
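Below is a minimal sketch of one of the forecaster families investigated above, a stacked bidirectional LSTM, fitted to a synthetic daily-periodic demand curve; the window length, layer sizes, and data are stand-ins for the Trieste charging-station records.

```python
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    """Stacked bidirectional LSTM: past 24 hourly demand readings -> next hour."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)
    def forward(self, x):                 # x: (batch, 24, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # last time step -> one-step forecast

# Sliding windows over a synthetic daily-periodic demand curve (stand-in data)
t = torch.arange(0, 1000, dtype=torch.float32)
demand = torch.sin(2 * torch.pi * t / 24) + 0.1 * torch.randn_like(t)
X = torch.stack([demand[i:i + 24] for i in range(len(t) - 25)]).unsqueeze(-1)
y = torch.stack([demand[i + 24] for i in range(len(t) - 25)]).unsqueeze(-1)

model = BiLSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("train RMSE:", float(loss.sqrt()))
```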
13. ScalaDetect-5G: Ultra High-Precision Highly Elastic Deep Intrusion Detection System for 5G Network
Authors: Shengjia Chang, Baojiang Cui, Shaocong Feng. Computer Modeling in Engineering & Sciences, 2025, No. 9, pp. 3805-3827 (23 pages)
With the rapid advancement of mobile communication networks, key technologies such as Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) have enhanced the quality of service for 5G users but have also significantly increased the complexity of network threats. Traditional static defense mechanisms are inadequate for addressing the dynamic and heterogeneous nature of modern attack vectors. To overcome these challenges, this paper presents a novel algorithmic framework, SD-5G, designed for high-precision intrusion detection in 5G environments. SD-5G adopts a three-stage architecture comprising traffic feature extraction, elastic representation, and adaptive classification. Specifically, an enhanced Concrete Autoencoder (CAE) is employed to reconstruct and compress high-dimensional network traffic features, producing compact and expressive representations suitable for large-scale 5G deployments. To further improve accuracy in ambiguous traffic classification, a Residual Convolutional Long Short-Term Memory model with an attention mechanism (ResCLA) is introduced, enabling multi-level modeling of spatial-temporal dependencies and effective detection of subtle anomalies. Extensive experiments on benchmark datasets, including 5G-NIDD, CIC-IDS2017, ToN-IoT, and BoT-IoT, demonstrate that SD-5G consistently achieves F1 scores exceeding 99.19% across diverse network environments, indicating strong generalization and real-time deployment capabilities. Overall, SD-5G achieves a balance between detection accuracy and deployment efficiency, offering a scalable, flexible, and effective solution for intrusion detection in 5G and next-generation networks.
Keywords: 5G security; network intrusion detection; feature engineering; deep learning
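The Concrete Autoencoder mentioned above selects features by learning relaxed one-hot selection vectors. Below is a minimal sketch of the plain technique (not the paper's enhanced CAE): a Gumbel-softmax selector picks K of D features so that a small decoder can reconstruct the full input, with the temperature annealed toward discreteness; sizes and data are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcreteSelector(nn.Module):
    """Concrete (Gumbel-softmax) feature selector: learns K soft one-hot rows
    over D input features; at low temperature each row picks one feature."""
    def __init__(self, d_in, k):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(k, d_in) * 0.01)
    def forward(self, x, temp):
        w = F.gumbel_softmax(self.logits, tau=temp, hard=False)  # (K, D)
        return x @ w.t()                                         # (B, K) selected view

d_in, k = 78, 16                       # e.g., CIC-IDS2017-sized flow features
selector = ConcreteSelector(d_in, k)
decoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, d_in))
opt = torch.optim.Adam(list(selector.parameters()) + list(decoder.parameters()), 1e-2)

x = torch.randn(512, d_in)             # stand-in normalized traffic features
for step in range(300):                # anneal temperature from 10 toward 0.1
    temp = 10.0 * (0.01 ** (step / 299))
    opt.zero_grad()
    loss = F.mse_loss(decoder(selector(x, temp)), x)
    loss.backward()
    opt.step()

chosen = selector.logits.argmax(dim=1)  # indices of the K selected features
print("selected feature indices:", chosen.tolist())
```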
14. Improving Fundus Detection Precision in Diabetic Retinopathy Using Derivative-Based Deep Neural Networks
Authors: Asma Aldrees, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh, Mohd Anjum. Computer Modeling in Engineering & Sciences, 2025, No. 3, pp. 2487-2511 (25 pages)
Fundoscopic diagnosis involves assessing the proper functioning of the eye's nerves, blood vessels, retinal health, and the impact of diabetes on the optic nerves. Fundus disorders are a major global health concern, affecting millions of people worldwide due to their widespread occurrence. Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy. As a result, accurate fundus detection is essential for early diagnosis and effective treatment, helping to prevent severe complications and improve patient outcomes. To address this need, this article introduces a Derivative Model for Fundus Detection using Deep Neural Networks (DMFD-DNN) to enhance diagnostic precision. This method selects key features for fundus detection using the least derivative, which identifies features correlating with stored fundus images. Feature filtering relies on the minimum derivative, determined by extracting both similar and varying textures. In this research, the DNN model was integrated with the derivative model. Fundus images were segmented, features were extracted, and the DNN was iteratively trained to identify fundus regions reliably. The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally, taking into account the least possible derivative across iterations and using outputs from previous cycles. The hidden layer of the neural network operates on the most significant derivative, which may reduce precision across iterations. These derivatives are treated as inaccurate, and the model is subsequently trained using selective features and their corresponding extractions. The proposed model outperforms previous techniques in detecting fundus regions, achieving 94.98% accuracy and 91.57% sensitivity, with a minimal error rate of 5.43%. It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead, thereby improving operational efficiency and scalability. Ultimately, the proposed model enhances diagnostic precision and reduces errors, leading to more effective diagnosis and treatment of fundus dysfunction.
Keywords: deep neural network; feature extraction; fundus detection; medical image processing
15. A survey of backdoor attacks and defenses: From deep neural networks to large language models
Authors: Ling-Xin Jin, Wei Jiang, Xiang-Yu Wen, Mei-Yu Lin, Jin-Yu Zhan, Xing-Zhi Zhou, Maregu Assefa Habtie, Naoufel Werghi. Journal of Electronic Science and Technology, 2025, No. 3, pp. 13-35 (23 pages)
Deep neural networks (DNNs) have found extensive applications in safety-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to backdoors maliciously injected by adversaries. This vulnerability arises due to the intricate architecture and opacity of DNNs, resulting in numerous redundant neurons embedded within the models. Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs, thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications. This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them. Initially, we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs, highlighting the feasibility and practicality of generating backdoor attacks against DNNs. Subsequently, we provide an overview of notable works encompassing various attack and defense strategies, facilitating a comparative analysis of their approaches. Through these discussions, we offer constructive insights aimed at refining these techniques. Finally, we extend our research perspective to the domain of large language models (LLMs) and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs. Through a systematic review of existing studies on backdoor vulnerabilities in LLMs, we identify critical open challenges in this field and propose actionable directions for future research.
Keywords: backdoor attacks; backdoor defenses; deep neural networks; large language models
16. The Blockchain Neural Network Superior to Deep Learning for Improving the Trust of Supply Chain
Authors: Hsiao-Chun Han, Der-Chen Huang. Computer Modeling in Engineering & Sciences, 2025, No. 6, pp. 3921-3941 (21 pages)
With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of "the Overall Performance Characteristics of the Supply Chain" to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models, the Feedforward Neural Network (FNN) and the Deep Clustering Network (DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal (TEJ) database. These data are then virtualized using "the Metaverse Algorithm", and the selected virtualized blockchain variables are utilized to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, aligning with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (ranging from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate in effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
Keywords: blockchain neural network; deep learning; consensus algorithm; supply chain management; information security management
17. Dynamic Clustering Method for Underwater Wireless Sensor Networks based on Deep Reinforcement Learning
Authors: Kohyar Bolvary Zadeh Dashtestani, Reza Javidan, Reza Akbari. Journal of Harbin Engineering University (English Edition), 2025, No. 4, pp. 864-876 (13 pages)
Underwater wireless sensor networks (UWSNs) have emerged as a new paradigm of real-time organized systems, which are utilized in a diverse array of scenarios to manage the underwater environment surrounding them. One of the major challenges that these systems confront is topology control via clustering, which reduces the overhead of wireless communications within a network and ensures low energy consumption and good scalability. This study presents a clustering technique in which the clustering process and cluster head (CH) selection are performed based on the Markov decision process and deep reinforcement learning (DRL). The DRL algorithm selects the CH by maximizing the defined reward function. Subsequently, the sensed data are collected by the CHs and then sent to the autonomous underwater vehicles. In the final phase, the energy consumed by each sensor is calculated, and its residual energy is updated. Then, the autonomous underwater vehicle performs all clustering and CH selection operations. This procedure persists until the point of cessation, when the sensors' power has been reduced to such an extent that no node can become a CH. Analysis of the findings from this investigation and their comparison with alternative frameworks shows that this method can be used to control the cluster size and the number of CHs, which ultimately improves the energy usage of nodes and prolongs the lifespan of the network. Our simulation results illustrate that the suggested methodology surpasses the conventional low-energy adaptive clustering hierarchy, the distance- and energy-constrained K-means clustering scheme, and the vector-based forward protocol, and is viable for deployment in an actual operational environment.
Keywords: underwater wireless sensor network; clustering; cluster head selection; deep reinforcement learning
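The reward-driven CH rotation and residual-energy bookkeeping described above can be sketched without the learning machinery. The toy below replaces the DRL agent with a greedy choice over a hypothetical reward (residual energy minus a distance penalty) and uses a first-order radio energy model; all constants, positions, and packet sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
pos = rng.uniform(0, 100, (n, 2))          # sensor positions (m), hypothetical
energy = np.full(n, 2.0)                   # residual energy (J)

E_ELEC, E_AMP = 50e-9, 100e-12             # first-order radio model constants

def tx_cost(bits, d):
    return bits * E_ELEC + bits * E_AMP * d ** 2

def reward(ch, nodes):
    """Hypothetical reward: favor high residual energy and short member links,
    standing in for the paper's DRL reward function."""
    d = np.linalg.norm(pos[nodes] - pos[ch], axis=1)
    return energy[ch] - 0.01 * d.mean()

rounds = 0
while energy.max() > 0.05:                 # stop when no node can serve as CH
    alive = np.where(energy > 0.05)[0]
    if len(alive) < 2:
        break
    ch = max(alive, key=lambda i: reward(i, alive))   # greedy w.r.t. reward;
    members = alive[alive != ch]                      # a DRL agent would learn this
    for m in members:                      # members send 4000-bit packets to CH
        d = np.linalg.norm(pos[m] - pos[ch])
        energy[m] -= tx_cost(4000, d)
    energy[ch] -= len(members) * 4000 * E_ELEC        # CH receive cost
    energy[ch] -= tx_cost(4000 * len(members), 80.0)  # CH forwards to the AUV
    rounds += 1

print("network lifetime:", rounds, "rounds")
```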
18. Big Texture Dataset Synthesized Based on Gradient and Convolution Kernels Using Pre-Trained Deep Neural Networks
Authors: Farhan A. Alenizi, Faten Khalid Karim, Alaa R. Al-Shamasneh, Mohammad Hossein Shakoor. Computer Modeling in Engineering & Sciences, 2025, No. 8, pp. 1793-1829 (37 pages)
Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly, and providing a big dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images; therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates new classes of textures. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures is increased through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is around 4 to 10 times faster than some well-known generative networks, and the quality of the generated textures surpasses that of these networks. The proposed method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics, and it can provide a big texture dataset to train deep networks. A new big texture dataset was created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels, organized into 600 classes. It has been uploaded to the Kaggle site and Google Drive. This dataset is called BigTex. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
Keywords: big texture dataset; data generation; pre-trained deep neural network
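The paper's fast kernel-based generator is not described in enough detail here to reproduce, so the sketch below shows the classic gradient-based alternative it builds on: descending on the image itself so that its Gram matrices under a pre-trained VGG match an exemplar's. The layer choice and iteration count are arbitrary assumptions, and the pre-trained weights download on first run.

```python
import torch
import torchvision.models as models

# Pre-trained VGG16 feature extractor (frozen; weights download on first run)
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = {3, 8, 15}                      # early conv blocks capture texture

def gram_feats(x):
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in LAYERS:
            b, c, hh, ww = h.shape
            f = h.view(c, hh * ww)
            feats.append(f @ f.t() / (c * hh * ww))   # Gram matrix
    return feats

exemplar = torch.rand(1, 3, 150, 150)    # stand-in exemplar texture patch
target = [g.detach() for g in gram_feats(exemplar)]

synth = torch.rand(1, 3, 150, 150, requires_grad=True)
opt = torch.optim.Adam([synth], lr=0.02)
for _ in range(100):                     # gradient descent on the image itself
    opt.zero_grad()
    loss = sum(((g - t) ** 2).sum() for g, t in zip(gram_feats(synth), target))
    loss.backward()
    opt.step()
    synth.data.clamp_(0, 1)              # keep pixel values in range
print("final style loss:", float(loss))
```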
19. Resource Allocation in V2X Networks: A Double Deep Q-Network Approach with Graph Neural Networks
Authors: Zhengda Huan, Jian Sun, Zeyu Chen, Ziyi Zhang, Xiao Sun, Zenghui Xiao. Computers, Materials & Continua, 2025, No. 9, pp. 5427-5443 (17 pages)
With the advancement of Vehicle-to-Everything (V2X) technology, efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance. Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions. To address these challenges, this study presents an innovative framework that combines Graph Neural Networks (GNNs) with a Double Deep Q-Network (DDQN), utilizing dynamic graph structures and reinforcement learning. An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology, thereby improving decision accuracy and efficiency. Meanwhile, the framework models communication links as nodes and interference relationships as edges, effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information. Employing an aggregation mechanism based on the Graph Attention Network (GAT), it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance, ensuring more efficient and adaptive resource management. This design ensures reliable Vehicle-to-Vehicle (V2V) communication while maintaining high Vehicle-to-Infrastructure (V2I) throughput. The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment, allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions. Experimental results demonstrate that the proposed method significantly reduces computational overhead, mitigates latency, and improves resource utilization efficiency in vehicular networks under complex traffic scenarios. This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems, offering substantial theoretical significance and practical value.
Keywords: resource allocation; V2X; double deep Q-network; graph neural network
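The attention-weighted neighbor aggregation described above is the core of a GAT layer. The from-scratch sketch below follows the entry's modeling convention (communication links as nodes, an interference adjacency as edges); the features and adjacency are random placeholders, not the paper's simulation setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single graph-attention layer: nodes are V2V links, edges mark
    interference relations; neighbors are aggregated by learned attention."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)
        self.a = nn.Linear(2 * d_out, 1, bias=False)

    def forward(self, x, adj):             # x: (N, d_in), adj: (N, N) of 0/1
        h = self.W(x)                      # (N, d_out)
        n = h.size(0)
        # All ordered pairs (h_i, h_j) for pairwise attention scores
        pairs = torch.cat([h.repeat_interleave(n, 0), h.repeat(n, 1)], dim=1)
        e = F.leaky_relu(self.a(pairs)).view(n, n)     # raw attention scores
        e = e.masked_fill(adj == 0, float('-inf'))     # keep real edges only
        alpha = torch.softmax(e, dim=1)                # attention per neighbor
        return F.elu(alpha @ h)            # attention-weighted aggregation

# 5 communication links; adjacency marks interfering pairs (self-loops kept)
x = torch.randn(5, 8)
adj = ((torch.rand(5, 5) > 0.5).float() + torch.eye(5)).clamp(max=1)
emb = GATLayer(8, 16)(x, adj)
print(emb.shape)                           # torch.Size([5, 16])
```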
20. Deep reinforcement learning based latency-energy minimization in smart healthcare network
Authors: Xin Su, Xin Fang, Zhen Cheng, Ziyang Gong, Chang Choi. Digital Communications and Networks, 2025, No. 3, pp. 795-805 (11 pages)
Significant breakthroughs in the Internet of Things (IoT) and 5G technologies have driven several smart healthcare activities, leading to a flood of computationally intensive applications in smart healthcare networks. Mobile Edge Computing (MEC) is considered an efficient solution for providing powerful computing capabilities to latency- or energy-sensitive nodes. The low-latency and high-reliability requirements of healthcare application services can be met through optimal offloading and resource allocation for the computational tasks of the nodes. In this study, we established a system model consisting of two types of nodes by considering non-divisible computational tasks with a trade-off between latency and energy consumption. To minimize the processing cost of the system tasks, a Mixed-Integer Nonlinear Programming (MINLP) task offloading problem is proposed. Furthermore, this problem is decomposed into a task offloading decision problem and a resource allocation problem. The resource allocation problem is solved using traditional optimization algorithms, and the offloading decision problem is solved using a deep reinforcement learning algorithm. We propose an Online Offloading based on Deep Reinforcement Learning (OO-DRL) algorithm with parallel deep neural networks and a weight-sensitive experience replay mechanism. Simulation results show that, compared with several existing methods, our proposed algorithm can perform real-time task offloading in a smart healthcare network under dynamically varying environments and reduce the system task processing cost.
Keywords: smart healthcare network; mobile edge computing; resource allocation; computation offloading; deep reinforcement learning
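The decomposition described above (a learned network proposes offloading decisions; a conventional optimizer handles resource allocation) can be sketched in the style of DROO-like online offloading. This is an assumption-laden stand-in, not OO-DRL itself: the cost model, quantization rule, and all constants are hypothetical, and the replay-based training of the policy network is omitted.

```python
import numpy as np
import torch

rng = np.random.default_rng(3)
N = 8                                        # healthcare sensing nodes

def task_cost(offload, chan):
    """Hypothetical latency-energy cost: offloading is cheap on good channels,
    local execution has a fixed compute cost (stand-in for the MINLP model)."""
    local = 1.0
    edge = 0.3 + 0.8 / chan                  # transmission worsens on weak links
    return float(np.where(offload == 1, edge, local).sum())

policy = torch.nn.Sequential(                # DNN maps channel gains to a
    torch.nn.Linear(N, 64), torch.nn.ReLU(),  # relaxed offloading decision
    torch.nn.Linear(64, N), torch.nn.Sigmoid())

chan = rng.uniform(0.2, 2.0, N)
probs = policy(torch.tensor(chan, dtype=torch.float32)).detach().numpy()

# Quantize the relaxed output into several candidate binary decisions and keep
# the cheapest, mirroring the decision/resource-allocation decomposition above.
order = np.argsort(-probs)
candidates = [(probs > 0.5).astype(int)]
for k in range(1, N):
    c = np.zeros(N, dtype=int)
    c[order[:k]] = 1                         # offload the k most promising nodes
    candidates.append(c)
best = min(candidates, key=lambda c: task_cost(c, chan))
print("offload decision:", best, "cost:", task_cost(best, chan))
```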