Journal Articles
384 articles found
1. Editorial: Special Issue on the Next-Generation Deep Learning Approaches to Emerging Real-World Applications
Authors: Yu Zhou, Eneko Osaba, Xiao Zhang. Computers, Materials & Continua, 2025, Issue 7, pp. 237-242 (6 pages).
Deep learning (DL), as one of the most transformative technologies in artificial intelligence (AI), is undergoing a pivotal transition from laboratory research to industrial deployment. Advancing at an unprecedented pace, DL is transcending theoretical and application boundaries to penetrate emerging real-world scenarios such as industrial automation, urban management, and health monitoring, thereby driving a new wave of intelligent transformation. In August 2023, Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025 [1]. However, the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning, including data heterogeneity, insufficient model generalization, computational resource constraints, and privacy-security trade-offs. The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion, lightweight design, interpretability enhancement, and cross-disciplinary collaborative optimization, in order to develop more efficient, robust, and practically valuable intelligent systems.
Keywords: deep learning (DL); artificial intelligence (AI); industrial deployment; intelligent transformation; health monitoring; emerging real-world scenarios; transformative technologies
2. A critical evaluation of deep-learning based phylogenetic inference programs using simulated datasets
Authors: Yixiao Zhu, Yonglin Li, Chuhao Li, Xing-Xing Shen, Xiaofan Zhou. Journal of Genetics and Genomics, 2025, Issue 5, pp. 714-717 (4 pages).
Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology. Many standard phylogenetic methods (such as maximum likelihood [ML]) rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy. Emerging deep learning (DL) techniques offer a powerful alternative. Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations. DL methods can autonomously uncover meaningful patterns from data, thereby bypassing potential biases introduced by predefined features (Franklin, 2005; Murphy, 2012). Recent efforts have aimed to apply deep neural networks (DNNs) to phylogenetics, with a growing number of applications in tree reconstruction (Suvorov et al., 2020; Zou et al., 2020; Nesterenko et al., 2022; Smith and Hahn, 2023; Wang et al., 2023), substitution model selection (Abadi et al., 2020; Burgstaller-Muehlbacher et al., 2023), and diversification rate inference (Voznica et al., 2022; Lajaaiti et al., 2023; Lambert et al., 2023). In phylogenetic tree reconstruction, PhyDL (Zou et al., 2020) and Tree_learning (Suvorov et al., 2020) are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid (AA) and DNA sequences, respectively.
Keywords: phylogenetic inference; models of sequence evolution; deep learning (DL); molecular sequences; simulated datasets; phylogenetic methods; evolutionary biology
3. A deep learning model for ocean surface latent heat flux based on transformer and data assimilation
Authors: Yahui Liu, Hengxiao Li, Jichao Wang. Acta Oceanologica Sinica, 2025, Issue 5, pp. 115-130 (16 pages).
Efficient and accurate prediction of ocean surface latent heat fluxes is essential for understanding and modeling climate dynamics. Conventional estimation methods have low resolution and lack accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of physical processes, the transformer model encounters the problem of error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the data assimilation (DA) technique with the transformer model and continuously modify the model state to make it closer to the actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates transformer, convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that the TransNetDA method surpasses traditional techniques in terms of root mean square error and R² metrics, showcasing its superior performance in predicting latent heat fluxes at the ocean surface.
Keywords: climate dynamics; deep learning (DL); data assimilation (DA); transformer; ensemble Kalman filter; ocean surface latent heat flux
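The assimilation step described in this abstract (the keywords mention an ensemble Kalman filter) can be illustrated with a minimal scalar sketch: each forecast ensemble member is nudged toward the observation by the Kalman gain. The state dimension, the numbers, and the deterministic (non-perturbed) update below are illustrative assumptions, not the paper's actual configuration.

```python
import random

def enkf_update(ensemble, obs, obs_var):
    """Ensemble Kalman filter analysis step for a scalar state.

    Nudges each forecast member toward the observation, weighted by the
    Kalman gain K = P / (P + R), where P is the forecast (ensemble)
    variance and R the observation-error variance. This is a simplified
    deterministic update (no perturbed observations).
    """
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    k = p / (p + obs_var)                                 # Kalman gain in (0, 1)
    return [x + k * (obs - x) for x in ensemble]

random.seed(0)
forecast = [random.gauss(20.0, 2.0) for _ in range(50)]   # model-predicted flux members
analysis = enkf_update(forecast, obs=25.0, obs_var=1.0)   # corrected toward observation
```

The analysis mean lands between the forecast mean and the observation, and the ensemble spread shrinks, which is the "continuously modify the model state toward observations" behavior the abstract describes.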
4. Deep Learning Models for Detecting Cheating in Online Exams
Authors: Siham Essahraui, Ismail Lamaakal, Yassine Maleh, Khalid El Makkaoui, Mouncef Filali Bouami, Ibrahim Ouahbi, May Almousa, Ali Abdullah S. Al Qahtani, Ahmed A. Abd El-Latif. Computers, Materials & Continua, 2025, Issue 11, pp. 3151-3183 (33 pages).
The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of various deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, and ResNet variants, among others, using two specialized datasets (OEP and OP) tailored for online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. For the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.
Keywords: anti-cheating model; computer vision (CV); deep learning (DL); online exam proctoring; neural networks; facial recognition; biometric authentication; security of distance education
5. Enhancing User Experience in AI-Powered Human-Computer Communication with Vocal Emotions Identification Using a Novel Deep Learning Method
Authors: Ahmed Alhussen, Arshiya Sajid Ansari, Mohammad Sajid Mohammadi. Computers, Materials & Continua, 2025, Issue 2, pp. 2909-2929 (21 pages).
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires, as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word study assists the speaker's demand to be understood. Vocal emotion recognition has become an essential component of modern HCC networks, yet integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques were developed in the past; with the development of artificial intelligence (AI), and especially deep learning (DL) technology, research incorporating real data is becoming increasingly common. This research therefore presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the proposed SHO-LSTM technique. Wiener filter (WF) and mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise from and extract features of the data. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. The proposed framework was implemented in Python. In the assessment phase, numerous metrics are used to evaluate the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The SHO-LSTM's outcomes are contrasted with those of previously conducted research; based on these comparative assessments, the proposed approach outperforms current approaches in vocal emotion recognition.
Keywords: human-computer communication (HCC); vocal emotions; live vocal; artificial intelligence (AI); deep learning (DL); selfish herd optimization-tuned long short-term memory (SHO-LSTM)
6. RFFsNet-SEI: a multidimensional balanced-RFFs deep neural network framework for specific emitter identification (cited 2 times)
Authors: FAN Rong, SI Chengke, HAN Yi, WAN Qun. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, Issue 3, pp. 558-574 and F0002 (18 pages).
Existing specific emitter identification (SEI) methods based on hand-crafted features have the drawbacks of losing feature information and involving multiple processing stages, which reduce the identification accuracy of emitters and complicate the identification procedure. In this paper, we propose a deep SEI approach via multidimensional feature extraction for radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by virtue of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and I-Q data are formed into the balanced-RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme including physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from received raw data end-to-end, it accelerates SEI implementation and simplifies the identification procedure. Moreover, as both the temporal and spectral features of the received signal are extracted by RFFsNet-SEI, identification accuracy is improved. Finally, we compare RFFsNet-SEI with the counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
Keywords: specific emitter identification (SEI); deep learning (DL); radio frequency fingerprint (RFF); multidimensional feature extraction (MFE); variational mode decomposition (VMD)
7. Spectrum Prediction Based on GAN and Deep Transfer Learning: A Cross-Band Data Augmentation Framework (cited 7 times)
Authors: Fandi Lin, Jin Chen, Guoru Ding, Yutao Jiao, Jiachen Sun, Haichao Wang. China Communications (SCIE, CSCD), 2021, Issue 1, pp. 18-32 (15 pages).
This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio equipment may frequently switch the target frequency as the electromagnetic environment changes. The previously trained prediction model often cannot maintain good performance when facing a small amount of historical data for the new target frequency. Moreover, the cognitive radio equipment usually implements dynamic spectrum access in real time, which means the time to recollect data for the new task frequency band and retrain the model is very limited. To address the above issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GAN) and deep transfer learning. First, through a similarity measurement, we pre-train a GAN model using the historical data of the frequency band that is the most similar to the target frequency band. Then, through data augmentation by feeding the small amount of target data into the pre-trained GAN, a temporal-spectral residual network is further trained using deep transfer learning and the generated high-similarity data from the GAN. Finally, experiment results demonstrate the effectiveness of the proposed framework.
Keywords: cognitive radio; cross-band spectrum prediction; deep transfer learning; generative adversarial network; cross-band data augmentation framework
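The "similarity measurement" used above to pick the source band for pre-training is not specified in this abstract; one simple stand-in is Pearson correlation between band occupancy histories. The band names and occupancy traces below are synthetic assumptions for illustration only.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def most_similar_band(target, candidates):
    """Return the name of the candidate band most correlated with the target."""
    return max(candidates, key=lambda name: pearson(target, candidates[name]))

# Synthetic occupancy traces: band_a tracks the target, band_b is anti-correlated.
target = [0.1, 0.5, 0.9, 0.4, 0.2, 0.8]
bands = {
    "band_a": [0.2, 0.6, 1.0, 0.5, 0.3, 0.9],   # target shifted up by 0.1
    "band_b": [0.9, 0.5, 0.1, 0.6, 0.8, 0.2],   # roughly inverted
}
best = most_similar_band(target, bands)
```

The selected band's history would then seed GAN pre-training before fine-tuning on the scarce target-band data.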
8. Deep learning algorithm featuring continuous learning for modulation classification in wireless networks
Authors: WU Nan, SUN Yu, WANG Xudong. 太赫兹科学与电子信息学报 (Journal of Terahertz Science and Electronic Information Technology), 2024, Issue 2, pp. 209-218 (10 pages).
Although modulation classification based on deep neural networks can achieve high modulation classification (MC) accuracies, catastrophic forgetting will occur when the neural network model continues to learn new tasks. In this paper, we simulate the dynamic wireless communication environment, focus on breaking the learning paradigm of isolated automatic MC, and develop an algorithm for continuous automatic MC. First, a memory for storing representative old-task modulation signals is built, which is employed to limit the gradient update direction of new tasks in the continuous learning stage so that the loss of old tasks also keeps a downward trend. Second, to better simulate the dynamic wireless communication environment, we employ the mini-batch gradient algorithm, which is more suitable for continuous learning. Finally, the signals in the memory can be replayed to further strengthen the characteristics of the old-task signals in the model. Simulation results verify the effectiveness of the method.
Keywords: deep learning (DL); modulation classification; continuous learning; catastrophic forgetting; cognitive radio communications
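The memory-replay idea described above (keep representative old-task samples and mix them into new-task mini-batches) can be sketched as follows; the buffer capacity and reservoir-sampling scheme are illustrative choices, not necessarily the paper's.

```python
import random

class ReplayMemory:
    """Fixed-size memory of old-task samples, filled by reservoir sampling
    so that every sample seen so far has an equal chance of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample   # replace a random slot

    def mixed_batch(self, new_samples, k):
        """A training mini-batch: new-task samples plus k replayed old ones."""
        return list(new_samples) + self.rng.sample(self.buffer, k)

memory = ReplayMemory(capacity=100)
for sig in range(1000):                 # stand-in for old-task modulation signals
    memory.add(("old", sig))
batch = memory.mixed_batch([("new", i) for i in range(8)], k=4)
```

Training on such mixed batches keeps the old-task loss trending downward while the model learns the new task, which is the anti-forgetting mechanism the abstract describes.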
9. Probabilistic Automata-Based Method for Enhancing Performance of Deep Reinforcement Learning Systems
Authors: Min Yang, Guanjun Liu, Ziyuan Zhou, Jiacun Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 11, pp. 2327-2339 (13 pages).
Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management. However, due to the model's inherent uncertainty, rigorous validation is requisite for its application in real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL poses a challenge for testing model behavior. We propose a novel performance improvement framework based on probabilistic automata, which aims to proactively identify and correct critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modifications. First, a probabilistic automaton is constructed from the historical trajectories of the DRL system by abstracting the state to generate probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) method is used to identify the key PDMU-action pairs that have the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under the key PDMU, we search for the new action that has the greatest impact on favorable results. Finally, the key PDMU, undesirable action, and new action are encapsulated as monitors that guide the DRL system toward more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three actual job scheduling scenarios confirmed the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
Keywords: deep reinforcement learning (DRL); performance improvement framework; probabilistic automata; real-time monitoring; key probabilistic decision-making unit (PDMU)-action pair
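The first step described above, building a probabilistic automaton from recorded trajectories, amounts to estimating transition probabilities from (state, action, next-state) counts. The sketch below shows only that counting step on toy abstracted states; the PDMU abstraction and reverse-BFS vulnerability search are more involved and are not reproduced here.

```python
from collections import defaultdict

def build_automaton(trajectories):
    """Estimate P(next_state | state, action) from trajectories given as
    lists of (state, action, next_state) triples over abstracted states."""
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for s, a, s2 in traj:
            counts[(s, a)][s2] += 1
    probs = {}
    for (s, a), nxt in counts.items():
        total = sum(nxt.values())
        probs[(s, a)] = {s2: c / total for s2, c in nxt.items()}
    return probs

trajs = [
    [("s0", "a", "s1"), ("s1", "b", "s2")],
    [("s0", "a", "s2"), ("s2", "b", "s2")],
    [("s0", "a", "s1")],
]
automaton = build_automaton(trajs)
# P(s1 | s0, a) = 2/3 and P(s2 | s0, a) = 1/3 from the counts above
```

A monitor would then inspect these estimated transitions to flag state-action pairs that most often lead to adverse outcomes.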
10. Geomagnetic Data Denoising Based on Deep Residual Shrinkage Network
Authors: Zhang Bin, Yang Chao, Zheng Hao-Hao, Yan Jia-Yong, Ma Chang-Ying. Applied Geophysics, 2025, Issue 3, pp. 820-834 and 897 (16 pages).
Geomagnetic data hold significant value in fields such as earthquake monitoring and deep earth exploration. However, the increasing severity of anthropogenic noise contamination in existing geomagnetic observatory data poses substantial challenges to high-precision computational analysis of geomagnetic data. To overcome this problem, we propose a denoising method for geomagnetic data based on the residual shrinkage network (RSN). We construct a sample library of simulated and measured geomagnetic data, and develop and train the RSN denoising network. Through its unique soft-thresholding module, RSN adaptively learns and removes noise from the data, effectively improving data quality. In experiments with noise-added measured data, RSN enhances the quality of the noisy data by approximately 12 dB on average. The proposed method is further validated through denoising analysis on measured data by comparing results of time-domain sequences, multiple squared coherence, and geomagnetic transfer functions.
Keywords: residual shrinkage network (RSN); signal processing; geomagnetic signal denoising; electromagnetic exploration; deep learning (DL)
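The soft-thresholding operation at the heart of residual shrinkage networks shrinks small (presumed-noise) values to zero while preserving the sign of larger ones. The sketch below uses a fixed threshold on a toy signal; in the RSN itself the threshold is learned adaptively per channel by an attention-like sub-network.

```python
def soft_threshold(x, tau):
    """Soft thresholding: sign(x) * max(|x| - tau, 0)."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

# Toy signal: large entries are "signal", small entries are "noise".
signal = [3.0, -0.4, 0.2, -2.5, 1.25]
denoised = [soft_threshold(v, tau=0.5) for v in signal]
# → [2.5, 0.0, 0.0, -2.0, 0.75]
```

Values inside the dead zone [-tau, tau] are zeroed out, while larger values are shrunk toward zero by tau, which is why the operation suppresses low-amplitude noise without flipping the polarity of genuine features.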
11. A framework for locating multiple RFID tags using RF hologram tensors
Authors: Xiangyu Wang, Jian Zhang, Shiwen Mao, Senthilkumar CG Periaswamy, Justin Patton. Digital Communications and Networks, 2025, Issue 2, pp. 337-348 (12 pages).
In this paper, we present a deep neural network (DNN) based framework that employs radio frequency (RF) hologram tensors to locate multiple ultra-high frequency (UHF) passive radio-frequency identification (RFID) tags. The RF hologram tensor exhibits a strong relationship between observation and spatial location, helping to improve robustness to dynamic environments and equipment. Since RFID data is often marred by noise, we implement two types of deep neural network architectures to clean up the RF hologram tensor. Leveraging the spatial relationship between tags, the deep networks effectively mitigate fake peaks in the hologram tensors caused by multipath propagation and phase wrapping. In contrast to fingerprinting-based localization systems that use deep networks as classifiers, the deep networks in the proposed framework treat the localization task as a regression problem, preserving the ambiguity between fingerprints. We also present an intuitive peak-finding algorithm to obtain estimated locations from the sanitized hologram tensors. The proposed framework is implemented using commodity RFID devices, and its superior performance is validated through extensive experiments.
Keywords: radio-frequency identification (RFID); ultra-high frequency (UHF) passive RFID tag; RF hologram tensor; indoor localization; deep learning (DL); Swin Transformer; self-supervised learning
12. Forecasting hourly PM_(2.5) concentrations based on a decomposition-ensemble-reconstruction framework incorporating deep learning algorithms
Authors: Peilei Cai, Chengyuan Zhang, Jian Chai. Data Science and Management, 2023, Issue 1, pp. 46-54 (9 pages).
Accurate predictions of hourly PM_(2.5) concentrations are crucial for preventing the harmful effects of air pollution. In this study, a new decomposition-ensemble framework incorporating the variational mode decomposition method (VMD), an econometric forecasting method (the autoregressive integrated moving average model, ARIMA), and deep learning techniques (convolutional neural networks (CNN) and the temporal convolutional network (TCN)) was developed to model the data characteristics of hourly PM_(2.5) concentrations. Taking the PM_(2.5) concentration of Lanzhou, Gansu Province, China as the sample, the empirical results demonstrated that the developed decomposition-ensemble framework is significantly superior to benchmarks based on the econometric model, machine learning models, basic deep learning models, and traditional decomposition-ensemble models, for one-, two-, and three-step-ahead forecasts. This study verified the effectiveness of the new prediction framework in capturing the data patterns of PM_(2.5) concentrations, and the framework can be employed as a meaningful PM_(2.5) concentration prediction tool.
Keywords: PM_(2.5) concentration prediction; decomposition-ensemble-reconstruction framework; variational mode decomposition method; deep learning
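The decomposition-ensemble idea above (split the series into components, forecast each with a suited model, then recombine) can be sketched with a much simpler moving-average split into trend and residual; the paper itself uses VMD for decomposition and ARIMA/CNN/TCN as the per-component models, and the persistence forecaster below is only a placeholder.

```python
def decompose(series, window=3):
    """Split a series into a centered moving-average trend and a residual,
    so that trend[i] + residual[i] == series[i] for every i."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def naive_forecast(component):
    """Placeholder per-component model: persistence (repeat the last value)."""
    return component[-1]

series = [31.0, 35.0, 40.0, 38.0, 42.0, 45.0]   # illustrative hourly PM2.5 values
trend, residual = decompose(series)
prediction = naive_forecast(trend) + naive_forecast(residual)  # ensemble step
```

The point of the structure is that each component is simpler to model than the raw series; swapping the persistence placeholder for ARIMA on the residual and a TCN on the trend recovers the paper's general architecture.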
13. Recent Progresses in Deep Learning Based Acoustic Models (cited 10 times)
Authors: Dong Yu, Jinyu Li. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, Issue 3, pp. 396-409 (14 pages).
In this paper, we summarize recent progress made in deep learning based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end, with emphasis on feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
Keywords: attention model; convolutional neural network (CNN); connectionist temporal classification (CTC); deep learning (DL); long short-term memory (LSTM); permutation invariant training; speech adaptation; speech processing; speech recognition; speech separation
14. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (cited 2 times)
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology (CAS, CSCD), 2018, Issue 1, pp. 11-15 (5 pages).
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which have a great influence on the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
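The GA-based hyperparameter search described above can be sketched as follows. The toy fitness function below stands in for the validation accuracy of a trained DBNN (training a real network per individual is the expensive part this sketch omits), and the population size, crossover, and mutation settings are illustrative assumptions.

```python
import random

def ga_tune(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimal genetic algorithm over integer hyperparameters.

    bounds: {name: (lo, hi)} inclusive ranges, e.g. hidden units or epochs.
    Keeps the best individual each generation (elitism) and fills the rest
    of the population by crossover of two parents drawn from the top half,
    plus occasional random mutation.
    """
    rng = random.Random(seed)
    names = list(bounds)
    pop = [{k: rng.randint(*bounds[k]) for k in names} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [pop[0]]                                   # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[: pop_size // 2], 2)
            child = {k: rng.choice((p1[k], p2[k])) for k in names}  # crossover
            if rng.random() < 0.3:                       # mutation
                k = rng.choice(names)
                child[k] = rng.randint(*bounds[k])
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness standing in for validation accuracy of a trained DBNN:
# best at 128 hidden units and 50 epochs (an invented optimum).
def fitness(ind):
    return -abs(ind["hidden_units"] - 128) - abs(ind["epochs"] - 50)

best = ga_tune(fitness, {"hidden_units": (16, 256), "epochs": (5, 100)})
```

Elitism guarantees the best-so-far fitness never decreases across generations, which is what makes even this tiny GA a usable tuner for expensive black-box objectives.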
15. Identification of paralytic shellfish toxin-producing microalgae using machine learning and deep learning methods (cited 3 times)
Authors: Wei XU, Jie NIU, Wenyu GAN, Siyu GOU, Shuai ZHANG, Han QIU, Tianjiu JIANG. Journal of Oceanology and Limnology (SCIE, CAS, CSCD), 2022, Issue 6, pp. 2202-2217 (16 pages).
Paralytic shellfish poisoning (PSP) microalgae, as one of the harmful algal blooms, cause great damage to offshore fishery, marine culture, and the marine ecological environment. At present, there is no technique for real-time accurate identification of toxic microalgae. By combining three-dimensional fluorescence with machine learning (ML) and deep learning (DL), we developed methods to classify PSP and non-PSP microalgae. The average classification accuracies of these two methods for microalgae are above 90%, and the accuracies for discriminating 12 microalgae species in PSP and non-PSP microalgae are above 94%. When the emission wavelength is 650-690 nm, the fluorescence characteristic bands (excitation wavelength) occur differently at 410-480 nm and 500-560 nm for PSP and non-PSP microalgae, respectively. The identification accuracies of the ML models (support vector machine (SVM) and k-nearest neighbor rule (k-NN)) and the DL model (convolutional neural network (CNN)) for PSP microalgae are 96.25%, 96.36%, and 95.88%, respectively, indicating that ML and DL are suitable for the classification of toxic microalgae.
Keywords: paralytic shellfish poisoning (PSP); machine learning (ML); deep learning (DL); toxic algae classification
16. Deep learning for fast channel estimation in millimeter-wave MIMO systems (cited 3 times)
Authors: LYU Siting, LI Xiaohui, FAN Tao, LIU Jiawen, SHI Mingli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2022, Issue 6, pp. 1088-1095 (8 pages).
Channel estimation has been considered a key issue in millimeter-wave (mmWave) massive multi-input multi-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL) based fast channel estimation method for mmWave massive MIMO systems. The proposed method can directly and effectively estimate channel state information (CSI) from received data without performing pilot-based estimation in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN) based channel estimation network for the case of dimensional mismatch between input and output data, subsequently denoted as the channel (H) neural network (HNN). It can quickly estimate the channel by learning the inherent characteristics of the received data and the relationship between the received data and the channel, even though the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
Keywords: millimeter-wave (mmWave); channel estimation; deep learning (DL); dimensional mismatch; channel state information (CSI)
17. Deep learning-based time-varying channel estimation with basis expansion model for MIMO-OFDM system (cited 2 times)
Authors: HU Bo, YANG Lihua, REN Lulu, NIE Qian. High Technology Letters (EI, CAS), 2022, Issue 3, pp. 288-294 (7 pages).
For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL) based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into estimation of the basis coefficients. Specifically, the initial basis coefficients are first used to train the neural network in an offline manner, and then high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) estimated channel is used for the loss function in the training phase, which makes the proposed method more practical. Simulation results show that the proposed method has better performance and lower computational complexity than available schemes, and it is robust to fast time-varying channels in high-speed mobile scenarios.
Keywords: MIMO-OFDM; high-speed mobile; time-varying channel; deep learning (DL); basis expansion model (BEM)
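The parameter-reduction step above, representing a time-varying channel tap by a few basis coefficients, can be sketched with a complex-exponential basis, which is one common BEM choice (the abstract does not say which basis the paper uses, so this is an assumption). A length-16 tap that truly lies in the low-order subspace is captured exactly by 3 coefficients.

```python
import cmath

def bem_coefficients(h, q_max):
    """Project a length-N channel-tap sequence onto complex-exponential
    basis functions b_q[n] = exp(j*2*pi*q*n/N), q = -q_max .. q_max.
    Returns the basis coefficients {q: c_q} (an orthogonal projection)."""
    n_len = len(h)
    coeffs = {}
    for q in range(-q_max, q_max + 1):
        acc = sum(h[n] * cmath.exp(-2j * cmath.pi * q * n / n_len)
                  for n in range(n_len))
        coeffs[q] = acc / n_len
    return coeffs

def bem_reconstruct(coeffs, n_len):
    """Rebuild h[n] from the few basis coefficients."""
    return [sum(c * cmath.exp(2j * cmath.pi * q * n / n_len)
                for q, c in coeffs.items())
            for n in range(n_len)]

# A synthetic channel tap lying exactly in the q in {-1, 0, 1} subspace:
N = 16
true = [0.8 + 0.3 * cmath.exp(2j * cmath.pi * n / N)
        - 0.1 * cmath.exp(-2j * cmath.pi * n / N) for n in range(N)]
coeffs = bem_coefficients(true, q_max=1)   # 3 parameters instead of 16
rebuilt = bem_reconstruct(coeffs, N)
```

Estimating the 3 coefficients instead of all 16 time samples is exactly the reduction that makes the network's regression target small, which is the role the BEM plays in the scheme above.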
18. A Hierarchy Distributed-Agents Model for Network Risk Evaluation Based on Deep Learning (cited 1 time)
Authors: Jin Yang, Tao Li, Gang Liang, Wenbo He, Yue Zhao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2019, Issue 7, pp. 1-23 (23 pages).
Deep learning offers a critical capability to adapt to constantly changing environments through ongoing learning dynamics, which is especially relevant in network intrusion detection. In this paper, inspired by the theory of deep learning neural networks, a newly developed model, the Hierarchy Distributed-Agents Model for Network Risk Evaluation, is proposed. The architecture of the distributed-agents model is given, as well as the approach of analyzing network intrusion detection using deep learning; the mechanism of sharing hyper-parameters to improve learning efficiency is presented, and the hierarchical evaluative framework for network risk evaluation of the proposed model is built. Furthermore, to examine the proposed model, a series of experiments were conducted on the NSL-KDD datasets. The proposed model was able to differentiate between normal and abnormal network activities with an accuracy of 97.60% on the NSL-KDD datasets. As the experimental results indicate, the model developed in this paper is characterized by high-speed and high-accuracy processing, and it offers a preferable solution for network risk evaluation.
Keywords: network security; deep learning (DL); intrusion detection system (IDS); distributed agents
19. Effectiveness of Deep Learning Algorithms in Phishing Attack Detection for Cybersecurity Frameworks
Authors: Mitra Penmetsa, Jayakeshav Reddy Bhumireddy, Rajiv Chalasani, Srikanth Reddy Vangala, Ram Mohan Polam, Bhavana Kamarthapu. Journal of Data Analysis and Information Processing, 2025, Issue 3, pp. 331-346 (16 pages).
The widespread use of internet technologies is limited because people are worried about cybersecurity. With phishing, cybercriminals pose as reputable entities to trick users and access important information. Standard detection approaches struggle to keep up with the constantly changing strategies of cybercriminals. A new phishing attack detection framework is presented in this research, using the gated recurrent unit (GRU) artificial intelligence (AI) model. Labels have been added to the uniform resource locators (URLs) in the PhishTank dataset, so the model learns what is phishing and what is not. A thorough data preprocessing pipeline involving feature extraction, handling of missing data, and outlier detection checks is applied to maintain high data quality. The performance of the GRU model is outstanding, reaching 98.01% accuracy, an F1-score of 98.14%, 98.41% recall, and 98.67% precision, better than classical machine learning (ML) methods, including adaptive boosting (AdaBoost) and long short-term memory (LSTM). The proposed approach correctly handles dependencies among elements in a URL, resulting in a strong method for detecting phishing pages. Results from experiments verify the model's potential to accurately identify phishing attacks, offering significant advancements in cybersecurity defense systems.
Keywords: cybersecurity; phishing attacks; machine learning; deep learning (DL); GRU; PhishTank data; cyber attack defense
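The feature-extraction step in the preprocessing pipeline above is unspecified in the abstract; a minimal sketch of lexical URL features often used in phishing detection is shown below. The feature set and the example URL are illustrative assumptions, and a GRU would more typically consume the URL character sequence directly.

```python
def url_features(url):
    """Toy lexical features for a URL; real systems use far richer sets.
    Returns [length, digit count, has '@', subdomain count, uses https]."""
    host = url.split("//")[-1].split("/")[0]
    return [
        len(url),
        sum(ch.isdigit() for ch in url),
        int("@" in url),
        max(host.count(".") - 1, 0),       # rough subdomain count
        int(url.startswith("https://")),
    ]

# A typical deceptive pattern: a trusted brand buried in the subdomains.
feats = url_features("http://paypal.com.secure-login123.example.com/verify")
```

Long URLs, many digits, deep subdomain nesting, and a missing HTTPS prefix are all weak phishing signals on their own; the classifier's job is to combine them (or the raw character sequence) into a reliable decision.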
20. Construction and Application of a Diagnostic Model for the Major Walnut Diseases of Longnan, Gansu Based on the EasyDL Platform (cited 2 times)
Authors: 满自红, 王志成, 陈耀年, 王让军, 王一峰, 王明霞, 尚素琴. 西北农业学报 (Acta Agriculturae Boreali-occidentalis Sinica) (CAS, CSCD, PKU Core), 2024, Issue 5, pp. 971-980 (10 pages).
This work aims to solve the problem of accurately identifying diseases and pests in the walnut industry of the Longnan region of Gansu Province, so as to improve growers' ability to manage walnut orchards. Based on automated deep learning technology (AutoDL), and using machine-learning Model Search to realize an automated artificial intelligence (AutoML) algorithm framework, a diagnostic model for the major walnut diseases of Longnan, Gansu was built on EasyDL, a platform based on the open-source PaddlePaddle deep learning framework, and trained for diagnostic accuracy. In total, 246 training images entered the model, covering nine common walnut diseases; the model is deployed on a public cloud API and runs through a WeChat mini-program or a browser. After training, its diagnostic accuracy exceeded 95%. The results show that the model of common walnut diseases in the Longnan region built with EasyDL runs reliably and can provide growers with accurate disease diagnoses, guiding them to improve orchard management and respond to sudden plant-protection problems, so that comprehensive control measures can be taken promptly and economic losses caused by disease are minimized. It can also serve as a useful auxiliary tool for practitioners and grassroots researchers addressing walnut plant-protection issues.
Keywords: EasyDL; disease diagnosis; deep learning technology; comprehensive control; Gansu