Journal Articles
22,154 articles found
A Comparative Benchmark of Machine and Deep Learning for Cyberattack Detection in IoT Networks
1
Authors: Enzo Hoummady, Fehmi Jaafar. Computers, Materials & Continua, 2026, Issue 4, pp. 1070-1092 (23 pages).
With the proliferation of Internet of Things (IoT) devices, securing these interconnected systems against cyberattacks has become a critical challenge. Traditional security paradigms often fail to cope with the scale and diversity of IoT network traffic. This paper presents a comparative benchmark of classic machine learning (ML) and state-of-the-art deep learning (DL) algorithms for IoT intrusion detection. Our methodology employs a two-phased approach: a preliminary pilot study using a custom-generated dataset to establish baselines, followed by a comprehensive evaluation on the large-scale CICIoT Dataset 2023. We benchmarked algorithms including Random Forest, XGBoost, CNN, and Stacked LSTM. The results indicate that while top-performing models from both categories achieve over 99% classification accuracy, this metric masks a crucial performance trade-off. We demonstrate that tree-based ML ensembles exhibit superior precision (91%) in identifying benign traffic, making them effective at reducing false positives. Conversely, DL models demonstrate superior recall (96%), making them better suited for minimizing the interruption of legitimate traffic. We conclude that the selection of an optimal model is not merely a matter of maximizing accuracy but is a strategic choice dependent on the specific security priority: either minimizing false alarms or ensuring service availability. This work provides a practical framework for deploying context-aware security solutions in diverse IoT environments.
Keywords: Internet of Things, deep learning, abnormal network traffic, cyberattacks, machine learning
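The precision/recall trade-off this benchmark highlights follows directly from the standard definitions. A minimal sketch — the confusion counts below are hypothetical, chosen only to mirror the reported 91% precision and 96% recall:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP) penalizes false alarms;
    recall = TP/(TP+FN) penalizes missed detections."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts for the benign-traffic class:
# a tree-based ensemble that raises few false alarms...
p_ml, r_ml = precision_recall(tp=910, fp=90, fn=160)   # precision 0.91
# ...versus a DL model that rarely blocks legitimate traffic.
p_dl, r_dl = precision_recall(tp=960, fp=240, fn=40)   # recall 0.96
```

Which metric matters more depends on deployment: an operations team drowning in alerts optimizes for precision, while a service that must stay available optimizes for recall.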
Dynamic Resource Allocation for Multi-Priority Requests Based on Deep Reinforcement Learning in Elastic Optical Network
2
Authors: Zhou Yang, Yang Xin, Sun Qiang, Yang Zhuojia. China Communications, 2026, Issue 2, pp. 312-327 (16 pages).
As the types of traffic requests increase, the elastic optical network (EON) is considered a promising architecture to carry multiple types of traffic requests simultaneously, including immediate reservation (IR) and advance reservation (AR). Various resource allocation schemes for IR/AR requests have been designed in EON to reduce bandwidth blocking probability (BBP). However, these schemes do not consider the different transmission requirements of IR requests and cannot maintain a low BBP for high-priority requests. In this paper, multi-priority is considered in the hybrid IR/AR request scenario. We modify the asynchronous advantage actor critic (A3C) model and propose an A3C-assisted priority resource allocation (APRA) algorithm. APRA integrates the priority and transmission quality of IR requests to design the A3C reward function, then dynamically allocates dedicated resources for different IR requests according to time-varying requirements. By maximizing the reward, the transmission quality of IR requests can be matched with the priority, and a lower BBP for high-priority IR requests can be ensured. Simulation results show that APRA reduces the BBP of high-priority IR requests from 0.0341 to 0.0138, and the overall network operation gain is improved by 883 compared to the scheme without considering priority.
Keywords: deep reinforcement learning, dynamic resource allocation, elastic optical network, multi-priority requests
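The paper's reward couples request priority with transmission quality, but the abstract does not give its exact shaping; the weights, penalty, and function name below are illustrative assumptions only:

```python
def apra_reward(priority, qot, blocked, w_p=1.0, w_q=0.5, penalty=2.0):
    """Illustrative A3C reward sketch: serving a request earns a bonus
    scaled by its priority and quality of transmission (qot in [0, 1]);
    blocking a request costs more the higher its priority."""
    if blocked:
        return -penalty * priority
    return w_p * priority + w_q * qot
```

Maximizing a reward of this shape pushes the agent to protect high-priority IR requests first, which is the mechanism behind the reported BBP drop for that class.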
Robust Voltage Control for Active Distribution Networks via Safe Deep Reinforcement Learning Against State Perturbations
3
Authors: Meng Tian, Xiaoxu Li, Ziyang Zhu, Zhengcheng Dong, Li Gong, Jingang Lai. Protection and Control of Modern Power Systems, 2026, Issue 1, pp. 192-207 (16 pages).
With the prevalence of renewable distributed energy resources (DERs) such as photovoltaics (PVs), modern active distribution networks (ADNs) suffer from voltage deviation and power quality issues. However, traditional voltage control methods often face a trade-off between efficiency and effectiveness, and rarely ensure robust voltage safety under typical state perturbations in practical distribution grids. In this paper, a robust model-free voltage regulation approach is proposed that takes security and robustness into account simultaneously. In this context, the voltage control problem is formulated as a constrained Markov decision process (CMDP). A safety-augmented multi-agent deep deterministic policy gradient (MADDPG) algorithm is then trained to enable real-time collaborative optimization of ADNs, aiming to maintain nodal voltages within safe operational limits while minimizing total line losses. Moreover, a robust regulation loss is introduced to ensure reliable performance under various state perturbations in practical voltage controls. The proposed regulation algorithm effectively balances efficiency, safety, and robustness, and also demonstrates potential for generalizing these characteristics to other applications. Numerical studies validate the robustness of the proposed method under varying state perturbations on the IEEE test cases and the optimal integrated control performance when compared to other benchmarks.
Keywords: active distribution network, robust voltage control, state perturbation, model-free, safe deep reinforcement learning
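One common way to enforce a CMDP's voltage limits at execution time is a safety layer that projects the agent's action back into the feasible set. The linear voltage-sensitivity model and all names below are assumptions for illustration, not the paper's formulation:

```python
def project_safe(action, v_now, v_min=0.95, v_max=1.05, sens=0.05):
    """Clamp a reactive-power action (p.u.) so that the linearized
    post-action voltage v_now + sens * action stays in [v_min, v_max]."""
    lo = (v_min - v_now) / sens
    hi = (v_max - v_now) / sens
    return max(lo, min(hi, action))
```

Actions already inside the safe set pass through unchanged; only boundary-violating actions are trimmed, which keeps the learned policy intact while guaranteeing the voltage constraint.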
Enhanced multi-agent deep reinforcement learning for efficient task offloading and resource allocation in vehicular networks
4
Authors: Long Xu, Jiale Tan, Hongcheng Zhuang. Digital Communications and Networks, 2026, Issue 1, pp. 66-75 (10 pages).
In response to the rising demand for low-latency, computation-intensive applications in vehicular networks, this paper proposes an adaptive task offloading approach for Vehicle-to-Everything (V2X) environments. Leveraging an enhanced Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm with an attention mechanism, the proposed approach optimizes computation offloading and resource allocation, aiming to minimize energy consumption and service delay. In this paper, vehicles dynamically offload computation-intensive tasks both to nearby vehicles through V2V links and to roadside units through V2I links. The adaptive attention mechanism enables the system to prioritize relevant state information, leading to faster convergence. Simulations conducted in a realistic urban V2X scenario demonstrate that the proposed attention-enhanced MADDPG (AT-MADDPG) algorithm significantly improves performance, achieving notable reductions in both energy consumption and latency compared to baseline algorithms, especially in high-demand, dynamic scenarios.
Keywords: computation offloading, vehicular networks, deep reinforcement learning, adaptive offloading, spectrum and power allocation
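At its core, the attention mechanism turns per-feature relevance scores into a normalized weighting of the observation. A minimal softmax sketch (in the paper the scores would come from a learned network; here they are fixed inputs):

```python
import math

def attention_weights(scores):
    """Numerically stable softmax: subtract the max before
    exponentiating, then normalize so the weights sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# More relevant state components (e.g., nearby server load) get more weight.
w = attention_weights([2.0, 1.0, 0.5])
```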
Deep neural network based on adversarial training for short-term high-resolution precipitation nowcasting from radar echo images
5
Authors: Ruikai YANG, Shuangjian JIAO, Nan YANG. Journal of Oceanology and Limnology, 2026, Issue 1, pp. 85-98 (14 pages).
Precipitation nowcasting is of great importance for disaster prevention and mitigation. However, precipitation is a complex spatio-temporal phenomenon influenced by various underlying physical factors. Even slight changes in the initial precipitation field can have a significant impact on future precipitation patterns, making the nowcasting of short-term high-resolution precipitation a major challenge. Traditional deep learning methods often have difficulty capturing the long-term spatial dependence of precipitation and usually operate at low resolution. To address these issues, building upon the Simpler yet Better Video Prediction (SimVP) framework, we propose a deep generative neural network that incorporates the Simple Parameter-Free Attention Module (SimAM) and Generative Adversarial Networks (GANs) for short-term high-resolution precipitation event forecasting. Through an adversarial training strategy, critical precipitation features are extracted from complex radar echo images. During the adversarial learning process, the dynamic competition between the generator and the discriminator continuously enhances the model's prediction accuracy and resolution for short-term precipitation. Experimental results demonstrate that the proposed method can effectively forecast short-term precipitation events at various scales and shows the best overall performance among existing methods.
Keywords: precipitation nowcasting, deep learning, Simple Parameter-Free Attention Module (SimAM), Generative Adversarial Networks (GANs)
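SimAM derives a per-activation weight from an energy function rather than from learned parameters. A sketch over a flat activation list, using the module's published inverse-energy form; real implementations compute the statistics per channel over spatial positions, which is simplified here to the whole list:

```python
import math

def simam(x, lam=1e-4):
    """Parameter-free attention sketch: gate each activation v by
    sigmoid((v - mu)^2 / (4 * (var + lam)) + 0.5), so units that
    stand out from their neighbours are emphasized."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    def gate(v):
        e = (v - mu) ** 2 / (4 * (var + lam)) + 0.5
        return v / (1 + math.exp(-e))   # v * sigmoid(e)
    return [gate(v) for v in x]

out = simam([1.0, 2.0, 3.0])
```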
A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson’s Disease Detection with Small-Scale and Imbalanced Datasets
6
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. Computers, Materials & Continua, 2026, Issue 1, pp. 1410-1432 (23 pages).
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, this is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and extend the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% and reduces biased detection towards the majority class. Ablation experiments confirm the effectiveness of individual components. Two future research directions are also suggested.
Keywords: convolutional neural network, data generation, deep support vector machine, feature extraction, generative artificial intelligence, imbalanced dataset, medical diagnosis, Parkinson's disease, small-scale dataset
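The customized kernel above counters majority-class bias; a simpler, widely used stand-in for the same goal is inverse-frequency class weighting. This recipe is not the paper's kernel, just an illustration of the imbalance problem it targets (class names and counts are hypothetical):

```python
def class_weights(counts):
    """Weight each class by total / (k * n_c): rare classes (PD
    patients) get proportionally larger weights than the majority
    (healthy controls)."""
    total = sum(counts.values())
    k = len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

w = class_weights({"healthy": 80, "pd": 20})
```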
A Balanced Traffic Signal Scheduling Algorithm Based on Improved Deep Q Networks
7
Author: 贺道坤. 《机械设计与制造》 (Peking University Core), 2025, Issue 4, pp. 135-140 (6 pages).
To further relieve intersection congestion on urban roads during peak hours and achieve balanced traffic flow through each approach of an intersection, a balanced traffic signal scheduling algorithm based on improved Deep Q Networks is proposed. The features most relevant to traffic signal scheduling at intersections are extracted, and traffic signal models are established for a one-way intersection and a linear two-way intersection; on this basis, a traffic signal scheduling optimization model is constructed. To address the convergence and overestimation deficiencies of the Deep Q Networks algorithm when applied to traffic signal scheduling, the Deep Q Networks are improved with a dueling network, a double network, and a revised gradient update strategy, and a matching balanced scheduling algorithm is proposed. Simulation comparisons with classic Deep Q Networks verify the applicability and superiority of the proposed algorithm for traffic signal scheduling. Based on urban road data, simulations for two scenarios show that the algorithm can effectively shorten vehicle queue lengths at intersections, balance traffic throughput across approaches, and relieve congestion in peak travel directions, improving the efficiency of intersection traffic signal scheduling.
Keywords: traffic signal scheduling, intersection, deep Q networks, deep reinforcement learning, intelligent transportation
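The double-network improvement mentioned above is commonly realized as the Double DQN target: the online network picks the next action and the target network values it, curbing the overestimation of vanilla Deep Q Networks. A minimal sketch with the traffic state and networks abstracted away as plain lists:

```python
def double_dqn_target(reward, done, next_q_online, next_q_target, gamma=0.99):
    """Double-DQN bootstrap target for one transition."""
    if done:
        return reward
    # The online net selects the greedy next action...
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    # ...and the target net evaluates it.
    return reward + gamma * next_q_target[a_star]

t = double_dqn_target(1.0, False, [0.2, 0.9], [0.5, 0.3])
```

Note how the action chosen (index 1, the online net's argmax) is valued at 0.3 by the target net, not at its own optimistic 0.9.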
DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS (Cited by 1)
8
Authors: WANG Chen-yao, SHI Feng. 《数学杂志》, 2025, Issue 1, pp. 13-38 (26 pages).
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method combining multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, the method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: delay integro-differential equation, multi-task learning, parameter sharing structure, deep neural network, sequential training scheme
Integration of deep neural network modeling and LC-MS-based pseudo-targeted metabolomics to discriminate easily confused ginseng species (Cited by 2)
9
Authors: Meiting Jiang, Yuyang Sha, Yadan Zou, Xiaoyan Xu, Mengxiang Ding, Xu Lian, Hongda Wang, Qilong Wang, Kefeng Li, De-an Guo, Wenzhi Yang. Journal of Pharmaceutical Analysis, 2025, Issue 1, pp. 126-137 (12 pages).
Metabolomics covers a wide range of applications in life sciences, biomedicine, and phytology. Data acquisition (to achieve high coverage and efficiency) and analysis (to pursue good classification) are two key segments involved in metabolomics workflows. Various chemometric approaches utilizing either pattern recognition or machine learning have been employed to separate different groups. However, insufficient feature extraction, inappropriate feature selection, overfitting, or underfitting lead to an insufficient capacity to discriminate plants that are often easily confused. Using two ginseng varieties, namely Panax japonicus (PJ) and Panax japonicus var. major (PJvm), which contain similar ginsenosides, we integrated pseudo-targeted metabolomics and deep neural network (DNN) modeling to achieve accurate species differentiation. A pseudo-targeted metabolomics approach was optimized through data acquisition mode, ion pair generation, comparison between multiple reaction monitoring (MRM) and scheduled MRM (sMRM), and chromatographic elution gradient. In total, 1980 ion pairs were monitored within 23 min, allowing for the most comprehensive ginseng metabolome analysis. The established DNN model demonstrated excellent classification performance (in terms of accuracy, precision, recall, F1 score, area under the curve, and receiver operating characteristic (ROC)) using the entire metabolome data and a feature-selection dataset, exhibiting superior advantages over random forest (RF), support vector machine (SVM), extreme gradient boosting (XGBoost), and multilayer perceptron (MLP). Moreover, DNNs were advantageous for automated feature learning, nonlinear modeling, adaptability, and generalization. This study confirmed the practicality of the established strategy for efficient metabolomics data analysis and reliable classification performance even when using small-volume samples. This established approach holds promise for plant metabolomics and is not limited to ginseng.
Keywords: liquid chromatography-mass spectrometry, pseudo-targeted metabolomics, deep neural network, species differentiation, ginseng
Lightweight deep network and projection loss for eye semantic segmentation
10
Authors: Qinjie Wang, Tengfei Wang, Lizhuang Yang, Hai Li. 《中国科学技术大学学报》 (Peking University Core), 2025, Issue 7, pp. 59-68, 58, I0002 (12 pages).
Semantic segmentation of eye images is a complex task with important applications in human–computer interaction, cognitive science, and neuroscience. Achieving real-time, accurate, and robust segmentation algorithms is crucial for computationally limited portable devices such as augmented reality and virtual reality headsets. With the rapid advancements in deep learning, many network models have been developed specifically for eye image segmentation. Some methods divide the segmentation process into multiple stages to achieve model parameter miniaturization while enhancing output through post-processing techniques to improve segmentation accuracy; these approaches significantly increase inference time. Other networks adopt more complex encoding and decoding modules to achieve end-to-end output, which requires substantial computation. Therefore, balancing the model's size, accuracy, and computational complexity is essential. To address these challenges, we propose a lightweight asymmetric UNet architecture and a projection loss function. We utilize ResNet-3 layer blocks to enhance feature extraction efficiency in the encoding stage. In the decoding stage, we employ regular convolutions and skip connections to upscale the feature maps from the latent space to the original image size, balancing model size and segmentation accuracy. In addition, we leverage the geometric features of the eye region and design a projection loss function to further improve segmentation accuracy without adding any inference computational cost. We validate our approach on the OpenEDS2019 dataset for virtual reality and achieve state-of-the-art performance with 95.33% mean intersection over union (mIoU). Our model has only 0.63M parameters and runs at 350 FPS, which are 68% and 200% of the state-of-the-art model RITNet, respectively.
Keywords: lightweight deep network, projection loss, real-time semantic segmentation, convolutional neural networks, end-to-end
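The reported 95.33% mIoU follows the standard definition and can be computed from a per-class confusion matrix; a small sketch with a hypothetical two-class matrix:

```python
def mean_iou(conf):
    """conf[i][j] = pixels of true class i predicted as class j.
    IoU per class = TP / (TP + FP + FN); mIoU averages over classes."""
    k = len(conf)
    ious = []
    for c in range(k):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(k)) - tp
        fn = sum(conf[c]) - tp
        ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious)

m = mean_iou([[3, 1], [1, 3]])   # each class: 3 / (3 + 1 + 1) = 0.6
```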
Intrusion Detection Model on Network Data with Deep Adaptive Multi-Layer Attention Network(DAMLAN)
11
Authors: Fatma S. Alrayes, Syed Umar Amin, Nada Ali Hakami, Mohammed K. Alzaylaee, Tariq Kashmeery. Computer Modeling in Engineering & Sciences, 2025, Issue 7, pp. 581-614 (34 pages).
The growing incidence of cyberattacks necessitates robust and effective Intrusion Detection Systems (IDS) for enhanced network security. While conventional IDSs can be unsuitable for detecting diverse and emerging attacks, there is a demand for better techniques to improve detection reliability. This study introduces a new method, the Deep Adaptive Multi-Layer Attention Network (DAMLAN), to boost the results of intrusion detection on network data. Through its multi-scale attention mechanisms and graph features, DAMLAN aims to address both known and unknown intrusions. The real-world NSL-KDD dataset, a popular choice among IDS researchers, is used to assess the proposed model. There are 67,343 normal samples and 58,630 intrusion attacks in the training set, and 12,833 normal samples and 9711 intrusion attacks in the test set. The proposed DAMLAN method is more effective than standard models due to the patterns captured by the attention layers. The experimental performance of the proposed model demonstrates that it achieves 99.26% training accuracy and 90.68% testing accuracy, with precision reaching 98.54% on the training set and 96.64% on the testing set. The recall and F1 scores further support the model, with training-set values of 99.90% and 99.21% and testing-set values of 86.65% and 91.37%. These results provide a strong basis for the claims made regarding the model's potential to identify intrusion attacks and affirm its relatively strong overall performance, irrespective of attack type. Future work would extend the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.
Keywords: intrusion detection, deep adaptive networks, multi-layer attention, DAMLAN, network security, anomaly detection
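The reported testing-set figures are internally consistent: the F1 score is the harmonic mean of precision and recall, which can be checked directly from the numbers in the abstract:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (same units as inputs)."""
    return 2 * precision * recall / (precision + recall)

# Testing-set precision 96.64% and recall 86.65% reproduce the
# reported F1 of 91.37%.
score = f1(96.64, 86.65)
```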
A Modified Deep Residual-Convolutional Neural Network for Accurate Imputation of Missing Data
12
Authors: Firdaus Firdaus, Siti Nurmaini, Anggun Islami, Annisa Darmawahyuni, Ade Iriani Sapitri, Muhammad Naufal Rachmatullah, Bambang Tutuko, Akhiar Wista Arum, Muhammad Irfan Karim, Yultrien Yultrien, Ramadhana Noor Salassa Wandya. Computers, Materials & Continua, 2025, Issue 2, pp. 3419-3441 (23 pages).
Handling missing data accurately is critical in clinical research, where data quality directly impacts decision-making and patient outcomes. While deep learning (DL) techniques for data imputation have gained attention, challenges remain, especially when dealing with diverse data types. In this study, we introduce a novel data imputation method based on a modified convolutional neural network, specifically a Deep Residual-Convolutional Neural Network (DRes-CNN) architecture designed to handle missing values across various datasets. Our approach demonstrates substantial improvements over existing imputation techniques by leveraging residual connections and optimized convolutional layers to capture complex data patterns. We evaluated the model on publicly available datasets, including the Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV), which contain critical care patient data, and the Beijing Multi-Site Air Quality dataset, which measures environmental air quality. The proposed DRes-CNN method achieved a root mean square error (RMSE) of 0.00006, highlighting its high accuracy and robustness. We also compared it with the Low Light-Convolutional Neural Network (LL-CNN) and U-Net methods, which had RMSE values of 0.00075 and 0.00073, respectively. This represents an improvement of approximately 92% over LL-CNN and 91% over U-Net. The results show that this DRes-CNN-based imputation method outperforms current state-of-the-art models and establish DRes-CNN as a reliable solution for addressing missing data.
Keywords: data imputation, missing data, deep learning, deep residual convolutional neural network
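The quoted ~92% and ~91% gains are plain relative RMSE reductions and can be reproduced from the reported values:

```python
def pct_improvement(baseline_rmse, new_rmse):
    """Relative error reduction: 100 * (baseline - new) / baseline."""
    return 100 * (baseline_rmse - new_rmse) / baseline_rmse

vs_llcnn = pct_improvement(0.00075, 0.00006)  # ~92% vs LL-CNN
vs_unet = pct_improvement(0.00073, 0.00006)   # ~91.8% vs U-Net (the paper rounds to 91%)
```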
Deep residual systolic network for massive MIMO channel estimation by joint training strategies of mixed-SNR and mixed-scenarios
13
Authors: SUN Meng, JING Qingfeng, ZHONG Weizhi. Journal of Systems Engineering and Electronics, 2025, Issue 4, pp. 903-913 (11 pages).
Fifth-generation (5G) communication requires highly accurate estimation of the channel state information (CSI) to take advantage of the massive multiple-input multiple-output (MIMO) system. However, traditional channel estimation methods do not always yield reliable estimates. The methodology of this paper consists of a deep residual shrinkage network (DRSN)-based method that is used to solve this problem. Thus, the channel estimation approach based on the DRSN, with its ability to learn from noise-containing data, is first introduced. Then, the DRSN is used to train the noise reduction process based on the results of least square (LS) channel estimation applied to the pilot subcarriers, where the initially estimated subcarrier channel matrix is treated as a three-dimensional tensor forming the DRSN input. Afterward, a mixed signal-to-noise ratio (SNR) training data strategy is proposed based on the learning ability of the DRSN under different SNRs. Moreover, a joint mixed-scenario training strategy is carried out to test the multi-scenario robustness of the DRSN. As for the findings, the numerical results indicate that the DRSN method outperforms spatial-frequency-temporal convolutional neural networks (SF-CNN) with similar computational complexity and achieves better advantages over the full SNR range than the minimum mean squared error (MMSE) estimator with a limited dataset. Moreover, the DRSN approach shows robustness in different propagation environments.
Keywords: massive multiple-input multiple-output (MIMO), channel estimation, deep residual shrinkage network (DRSN), deep convolutional neural network (CNN)
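The LS estimates that feed the DRSN are, per pilot subcarrier, just the received symbol divided by the known transmitted pilot. A one-line sketch (in practice noise corrupts the received symbols, producing the noisy estimates the network then denoises):

```python
def ls_estimate(received, pilots):
    """Least-squares channel estimate per pilot subcarrier: H = Y / X
    (element-wise complex division)."""
    return [y / x for y, x in zip(received, pilots)]

# Noise-free toy case: the true channel is recovered exactly.
h = ls_estimate([2 + 2j, 1j], [1 + 1j, 1j])
```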
Handling class imbalance of radio frequency interference in deep learning-based fast radio burst search pipelines using a deep convolutional generative adversarial network
14
Authors: Wenlong Du, Yanling Liu, Maozheng Chen. Astronomical Techniques and Instruments, 2025, Issue 1, pp. 10-15 (6 pages).
This paper addresses the performance degradation issue in a deep learning-based fast radio burst search pipeline. This issue is caused by the class imbalance of the radio frequency interference samples in the training dataset, and one solution is applied to improve the distribution of the training data by augmenting minority class samples using a deep convolutional generative adversarial network. Experimental results demonstrate that retraining the deep learning model with the newly generated dataset leads to a new fast radio burst classifier, which effectively reduces false positives caused by periodic wide-band impulsive radio frequency interference, thereby enhancing the performance of the search pipeline.
Keywords: fast radio burst, deep convolutional generative adversarial network, class imbalance, radio frequency interference, deep learning
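Before any GAN enters the picture, the size of the imbalance fixes how many synthetic minority samples are needed; the class names and counts below are hypothetical:

```python
def samples_needed(counts, minority):
    """Synthetic samples required so the minority class matches the
    largest class in the training set."""
    return max(counts.values()) - counts[minority]

n = samples_needed({"broadband_rfi": 900, "periodic_rfi": 120}, "periodic_rfi")
```

The generator would then be asked for `n` new minority-class samples before the classifier is retrained on the balanced set.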
DMF: A Deep Multimodal Fusion-Based Network Traffic Classification Model
15
Authors: Xiangbin Wang, Qingjun Yuan, Weina Niu, Qianwei Meng, Yongjuan Wang, Chunxiang Gu. Computers, Materials & Continua, 2025, Issue 5, pp. 2267-2285 (19 pages).
With the rise of encrypted traffic, traditional network analysis methods have become less effective, leading to a shift towards deep learning-based approaches. Among these, multimodal learning-based classification methods have gained attention due to their ability to leverage diverse feature sets from encrypted traffic, improving classification accuracy. However, existing research predominantly relies on late fusion techniques, which hinder the full utilization of deep features within the data. To address this limitation, we propose a novel multimodal encrypted traffic classification model that synchronizes modality fusion with multiscale feature extraction. Specifically, our approach performs real-time fusion of modalities at each stage of feature extraction, enhancing feature representation at each level and preserving inter-level correlations for more effective learning. This continuous fusion strategy improves the model's ability to detect subtle variations in encrypted traffic, while boosting its robustness and adaptability to evolving network conditions. Experimental results on two real-world encrypted traffic datasets demonstrate that our method achieves classification accuracies of 98.23% and 97.63%, outperforming existing multimodal learning-based methods.
Keywords: deep fusion, intrusion detection, multimodal learning, network traffic classification
Forecasting electricity prices in the spot market utilizing wavelet packet decomposition integrated with a hybrid deep neural network
16
Authors: Heping Jia, Yuchen Guo, Xiaobin Zhang, Qianxin Ma, Zhenglin Yang, Yaxian Zheng, Dan Zeng, Dunnan Liu. Global Energy Interconnection, 2025, Issue 5, pp. 874-890 (17 pages).
Accurate forecasting of electricity spot prices is crucial for market participants in formulating bidding strategies. However, the extreme volatility of electricity spot prices, influenced by various factors, poses significant challenges for forecasting. To address the data uncertainty of electricity prices and effectively mitigate the gradient issues, overfitting, and computational challenges associated with using a single model, this paper proposes a framework for forecasting spot market electricity prices by integrating wavelet packet decomposition (WPD) with a hybrid deep neural network. By ensuring accurate data decomposition, the WPD algorithm aids in detecting fluctuating patterns and isolating random noise. The hybrid model integrates temporal convolutional networks (TCN) and long short-term memory (LSTM) networks to enhance feature extraction and improve forecasting performance. Compared to other techniques, it significantly reduces average errors, decreasing mean absolute error (MAE) by 27.3%, root mean square error (RMSE) by 66.9%, and mean absolute percentage error (MAPE) by 22.8%. This framework effectively captures the intricate fluctuations present in the time series, resulting in more accurate and reliable predictions.
Keywords: electricity price forecasting, long short-term memory, hybrid deep neural network, wavelet packet decomposition, temporal neural network
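Wavelet packet decomposition recursively splits both the low- and high-frequency bands of a series with a pair of analysis filters; the Haar filter is the simplest such pair, sketched here for one level (real WPD uses longer filters and recurses on both output bands):

```python
def haar_step(signal):
    """One Haar analysis step on an even-length series: pairwise
    averages (approximation band, the slow price trend) and pairwise
    differences (detail band, the fast fluctuations/noise)."""
    a = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return a, d

approx, detail = haar_step([4, 2, 6, 8])
```

Each band would then be forecast by the TCN-LSTM hybrid and the results recombined, which is what lets the framework separate trend from noise before prediction.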
ScalaDetect-5G: Ultra High-Precision Highly Elastic Deep Intrusion Detection System for 5G Networks
17
Authors: Shengjia Chang, Baojiang Cui, Shaocong Feng. Computer Modeling in Engineering & Sciences, 2025, Issue 9, pp. 3805-3827 (23 pages).
With the rapid advancement of mobile communication networks, key technologies such as Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) have enhanced the quality of service for 5G users but have also significantly increased the complexity of network threats. Traditional static defense mechanisms are inadequate for addressing the dynamic and heterogeneous nature of modern attack vectors. To overcome these challenges, this paper presents a novel algorithmic framework, SD-5G, designed for high-precision intrusion detection in 5G environments. SD-5G adopts a three-stage architecture comprising traffic feature extraction, elastic representation, and adaptive classification. Specifically, an enhanced Concrete Autoencoder (CAE) is employed to reconstruct and compress high-dimensional network traffic features, producing compact and expressive representations suitable for large-scale 5G deployments. To further improve accuracy on ambiguous traffic, a Residual Convolutional Long Short-Term Memory model with an attention mechanism (ResCLA) is introduced, enabling multi-level modeling of spatial-temporal dependencies and effective detection of subtle anomalies. Extensive experiments on benchmark datasets, including 5G-NIDD, CIC-IDS2017, ToN-IoT, and BoT-IoT, demonstrate that SD-5G consistently achieves F1 scores exceeding 99.19% across diverse network environments, indicating strong generalization and real-time deployment capabilities. Overall, SD-5G balances detection accuracy and deployment efficiency, offering a scalable, flexible, and effective solution for intrusion detection in 5G and next-generation networks.
Keywords: 5G security, network intrusion detection, feature engineering, deep learning
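The CAE stage described in the abstract selects and compresses input features with temperature-controlled (Concrete/Gumbel-softmax) relaxation. A minimal numpy sketch of that selection idea follows; the feature counts, logits, and shapes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def concrete_feature_selector(X, logits, temperature=0.5, rng=rng):
    """Soft feature selection in the style of a Concrete Autoencoder:
    each of k output features is a temperature-controlled softmax
    mixture over the d input features. As temperature -> 0, each row
    of the selection matrix approaches a one-hot (hard) choice."""
    # logits: (k, d) learnable selection parameters (random here, not trained)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / temperature
    weights = np.exp(z - z.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # rows sum to 1
    return X @ weights.T                           # (n, k) compressed features

X = rng.normal(size=(8, 48))        # 8 traffic flows, 48 raw features (toy numbers)
logits = rng.normal(size=(12, 48))  # compress 48 features down to 12
Z = concrete_feature_selector(X, logits)
print(Z.shape)  # (8, 12)
```

In the full CAE these logits are trained end-to-end against a reconstruction loss, so the low temperature forces the model to commit to a compact subset of informative traffic features.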
Application of deep learning-based convolutional neural networks in gastrointestinal disease endoscopic examination
18
Authors: Yang-Yang Wang, Bin Liu, Ji-Han Wang 《World Journal of Gastroenterology》 2025, Issue 36, pp. 50-69 (20 pages)
Gastrointestinal (GI) diseases, including gastric and colorectal cancers, significantly impact global health, necessitating accurate and efficient diagnostic methods. Endoscopic examination is the primary diagnostic tool; however, its accuracy is limited by operator dependency and interobserver variability. Advancements in deep learning, particularly convolutional neural networks (CNNs), show great potential for enhancing GI disease detection and classification. This review explores the application of CNNs in endoscopic imaging, focusing on polyp and tumor detection, disease classification, endoscopic ultrasound, and capsule endoscopy analysis. We compare the performance of CNN models with traditional diagnostic methods, highlighting their advantages in accuracy and real-time decision support. Despite promising results, challenges remain, including data availability, model interpretability, and clinical integration. Future directions include improving model generalization, enhancing explainability, and conducting large-scale clinical trials. With continued advancements, CNN-powered artificial intelligence systems could revolutionize GI endoscopy by enhancing early disease detection, reducing diagnostic errors, and improving patient outcomes.
Keywords: gastrointestinal diseases, endoscopic examination, deep learning, convolutional neural networks, computer-aided diagnosis
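The core operation of the CNN models this review surveys is the 2-D convolution applied to each endoscopic frame. A minimal numpy sketch is below; the synthetic "frame" and the fixed Sobel kernel are illustrative stand-ins for real imagery and learned filters:

```python
import numpy as np

def conv2d(image, kernel):
    """Single-channel 'valid' 2-D convolution (cross-correlation),
    the basic feature-extraction step of one CNN layer."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

frame = np.zeros((8, 8))
frame[:, 4:] = 1.0  # synthetic vertical "edge" standing in for a tissue boundary
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
response = conv2d(frame, sobel_x)
print(response.shape)  # (6, 6); strong responses mark the edge columns
```

A trained CNN stacks many such filters with nonlinearities and pooling, learning the kernels from labeled endoscopic data rather than fixing them by hand.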
Demand Forecasting of a Microgrid-Powered Electric Vehicle Charging Station Enabled by Emerging Technologies and Deep Recurrent Neural Networks
19
Authors: Sahbi Boubaker, Adel Mellit, Nejib Ghazouani, Walid Meskine, Mohamed Benghanem, Habib Kraiem 《Computer Modeling in Engineering & Sciences》 2025, Issue 5, pp. 2237-2259 (23 pages)
Electric vehicles (EVs) are gradually being deployed in the transportation sector. Although they have a high impact on reducing greenhouse gas emissions, their penetration is challenged by their random energy demand and the difficulty of scheduling their optimal charging. To cope with these problems, this paper presents a novel approach to energy demand forecasting for a photovoltaic grid-connected microgrid EV charging station. The present study is part of a comprehensive framework involving emerging technologies such as drones and artificial intelligence designed to support the EV charging scheduling task. By using predictive algorithms for solar generation and load demand estimation, this approach aims to ensure dynamic and efficient energy flow between the solar energy source, the grid, and the electric vehicles. The main contribution of this paper lies in developing an intelligent approach based on deep recurrent neural networks to forecast the energy demand using only its previous records. Various forecasters based on Long Short-Term Memory, Gated Recurrent Unit, and their bi-directional and stacked variants were investigated using a real dataset collected from an EV charging station located at Trieste University (Italy). The developed forecasters were evaluated and compared according to different metrics, including R, RMSE, MAE, and MAPE. We found that the obtained R values for both PV power generation and energy demand ranged between 97% and 98%. These findings can support reliable and efficient decision-making on the management side for the optimal scheduling of charging operations.
Keywords: microgrid, electric vehicles, charging station, forecasting, deep recurrent neural networks, energy management system
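The four evaluation metrics named in the abstract (R, RMSE, MAE, MAPE) can be computed in a few lines of numpy. The demand values below are illustrative toy numbers, not data from the paper:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Standard forecast-accuracy metrics: Pearson correlation (R),
    root-mean-square error, mean absolute error, and mean absolute
    percentage error."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100.0
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return r, rmse, mae, mape

y_true = np.array([20.0, 25.0, 40.0, 35.0, 30.0])  # hourly demand, kWh (toy)
y_pred = np.array([21.0, 24.0, 38.0, 36.0, 29.0])
r, rmse, mae, mape = forecast_metrics(y_true, y_pred)
print(f"R={r:.3f} RMSE={rmse:.3f} MAE={mae:.3f} MAPE={mape:.2f}%")
```

RMSE penalizes large errors more heavily than MAE, while MAPE is scale-free; reporting all four alongside R, as the paper does, guards against a forecaster that looks good on only one of them.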
Deep Reinforcement Learning Approach for X-rudder AUVs Fault Diagnosis Based on Deep Q-network
20
Authors: Chuanfa Chen, Xiang Gao, Yueming Li, Xuezhi Chen, Jian Cao, Yinghao Zhang 《Journal of Harbin Engineering University (English Edition)》 2025, Issue 6, pp. 1239-1251 (13 pages)
The rudder mechanism of the X-rudder autonomous underwater vehicle (AUV) is relatively complex, and fault diagnosis capability is an important guarantee for task execution in complex underwater environments. However, traditional fault diagnosis methods rely on prior knowledge and expert experience and lack accuracy. To improve the autonomy and accuracy of fault diagnosis and overcome the shortcomings of traditional algorithms, this paper proposes an X-rudder AUV fault diagnosis model based on the deep Q-network (DQN) reinforcement learning algorithm, which learns the relationship between state data and fault types, maps raw residual data to corresponding fault patterns, and achieves end-to-end mapping. In addition, to address the scarcity of X-rudder fault samples, Dropout is introduced during the model training phase to improve the performance of the DQN algorithm. Experimental results show that the proposed model improves convergence speed and overall performance compared to the unimproved DQN algorithm, with precision, recall, F1-score, and accuracy reaching up to 100%, 98.07%, 99.02%, and 98.50%, respectively; its accuracy is also higher than that of other machine learning algorithms such as backpropagation neural networks and support vector machines.
Keywords: autonomous underwater vehicles, X-rudder, fault diagnosis, deep Q-network, Dropout technique
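The mapping the abstract describes, from residual patterns to fault labels via Q-learning, can be illustrated with a tabular toy: states stand in for discretized rudder-residual patterns, actions for candidate fault labels, and the reward is positive for a correct diagnosis. This is a simplified sketch of the idea only; the real model replaces the table with a Dropout-regularized deep network, and all sizes and rewards below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 4           # 4 residual patterns, 4 fault types (toy)
TRUE_FAULT = np.array([0, 1, 2, 3])  # ground-truth fault label per pattern
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, epsilon = 0.5, 0.1            # learning rate, exploration rate

for episode in range(500):
    s = rng.integers(N_STATES)
    # epsilon-greedy action selection over candidate fault labels
    if rng.uniform() < epsilon:
        a = int(rng.integers(N_ACTIONS))
    else:
        a = int(np.argmax(Q[s]))
    reward = 1.0 if a == TRUE_FAULT[s] else -1.0
    # one-step Q update; a DQN would instead regress Q with a network
    Q[s, a] += alpha * (reward - Q[s, a])

diagnosis = Q.argmax(axis=1)
print(diagnosis)  # greedy policy recovers the true fault label per pattern
```

The end-to-end character claimed in the abstract comes from learning this state-to-label mapping directly from reward feedback, rather than hand-crafting diagnostic rules from expert experience.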