Journal Articles
417 articles found
A Deep Learning-Based Framework for Environment-Adaptive Navigation of Size-Adaptable Microswarms
1
Authors: Jialin Jiang, Lidong Yang, Shihao Yang, Li Zhang. Engineering, 2025, Issue 10, pp. 130-138 (9 pages)
Actively controllable microswarms are a rapidly developing research field with appealing characteristics. Autonomous collision-free navigation of microswarms in confined environments is suitable for various applications, including targeted therapy and delivery. However, several challenges remain unaddressed. First, microswarms possess varying dimensions, and a path planning method suitable for swarms of different dimensions is essential to avoid obstacles. Second, studies on the environment-adaptive navigation of reconfigurable microswarms are limited. Therefore, the planning of the pattern distribution of microswarms based on the local working environment should be examined. This study proposes a deep learning (DL)-based environment-adaptive navigation scheme for swarms. The controller provides reference moving directions for swarms of different sizes in static and dynamic scenarios. Moreover, a pattern-distribution planner was designed to navigate transformable swarms in unstructured environments. To validate the proposed scheme, we applied Fe3O4 nanoparticle swarms as a case study. The proposed scheme enables motion and pattern planning for microrobots of multiple sizes and reconfigurability in various working environments, which could foster a general navigation system for reconfigurable microswarms of different sizes.
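The paper's planner is DL-based and not given in code here, but the size-dependence it must handle can be illustrated with a classic configuration-space trick: inflate every obstacle by the swarm's radius, then plan for a point. The grid, radius values, and helper names below are illustrative assumptions, not the authors' method.

```python
from collections import deque

def inflate(obstacles, radius, w, h):
    """Grow each obstacle cell by the swarm radius (Chebyshev metric):
    planning for a sized swarm becomes point planning in this C-space."""
    grown = set()
    for ox, oy in obstacles:
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                x, y = ox + dx, oy + dy
                if 0 <= x < w and 0 <= y < h:
                    grown.add((x, y))
    return grown

def bfs_path(start, goal, blocked, w, h):
    """Shortest 4-connected path avoiding blocked cells; None if unreachable."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < w and 0 <= nxt[1] < h
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None

W = H = 7
wall = {(3, y) for y in range(6)}   # vertical wall; only (3, 6) is open
small_path = bfs_path((0, 0), (6, 0), inflate(wall, 0, W, H), W, H)
large_path = bfs_path((0, 0), (6, 0), inflate(wall, 1, W, H), W, H)
```

A zero-radius swarm squeezes through the one-cell gap, while the inflated map closes the gap for the larger swarm, so the same planner correctly refuses that route.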
Keywords: microswarms; automatic navigation; deep learning (DL)
Spectrum Prediction Based on GAN and Deep Transfer Learning: A Cross-Band Data Augmentation Framework (cited 7 times)
2
Authors: Fandi Lin, Jin Chen, Guoru Ding, Yutao Jiao, Jiachen Sun, Haichao Wang. China Communications (SCIE, CSCD), 2021, Issue 1, pp. 18-32 (15 pages)
This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio equipment may frequently switch the target frequency as the electromagnetic environment changes. The previously trained prediction model often cannot maintain good performance when facing a small amount of historical data for the new target frequency. Moreover, the cognitive radio equipment usually implements dynamic spectrum access in real time, which means the time to recollect data for the new task frequency band and retrain the model is very limited. To address these issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GANs) and deep transfer learning. First, through a similarity measurement, we pre-train a GAN model using the historical data of the frequency band that is most similar to the target band. Then, after data augmentation by feeding the small amount of target data into the pre-trained GAN, a temporal-spectral residual network is further trained using deep transfer learning and the generated high-similarity data from the GAN. Finally, experimental results demonstrate the effectiveness of the proposed framework.
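The framework's first step, picking the historical band most similar to the scarce target band, can be sketched with a plain Pearson correlation over occupancy histories. The band names and series below are toy assumptions; the paper's actual similarity measure may differ.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def most_similar_band(target_hist, candidate_hists):
    """Pick the band whose occupancy series correlates best with the
    (scarce) target-band history; its data would seed GAN pre-training."""
    return max(candidate_hists,
               key=lambda name: pearson(target_hist, candidate_hists[name]))

target = [0.1, 0.4, 0.8, 0.5, 0.2, 0.1]        # scarce target-band history
candidates = {
    "band_A": [0.2, 0.5, 0.9, 0.6, 0.3, 0.2],  # similar occupancy pattern
    "band_B": [0.9, 0.6, 0.2, 0.4, 0.8, 0.9],  # roughly inverted pattern
}
source = most_similar_band(target, candidates)
```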
Keywords: cognitive radio; cross-band spectrum prediction; deep transfer learning; generative adversarial network; cross-band data augmentation framework
Editorial: Special Issue on the Next-Generation Deep Learning Approaches to Emerging Real-World Applications
3
Authors: Yu Zhou, Eneko Osaba, Xiao Zhang. Computers, Materials & Continua, 2025, Issue 7, pp. 237-242 (6 pages)
Introduction: Deep learning (DL), as one of the most transformative technologies in artificial intelligence (AI), is undergoing a pivotal transition from laboratory research to industrial deployment. Advancing at an unprecedented pace, DL is transcending theoretical and application boundaries to penetrate emerging real-world scenarios such as industrial automation, urban management, and health monitoring, thereby driving a new wave of intelligent transformation. In August 2023, Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025 [1]. However, the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning, including data heterogeneity, insufficient model generalization, computational resource constraints, and privacy-security trade-offs. The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion, lightweight design, interpretability enhancement, and cross-disciplinary collaborative optimization, in order to develop more efficient, robust, and practically valuable intelligent systems.
Keywords: deep learning (DL); artificial intelligence (AI); industrial deployment; intelligent transformation; health monitoring; emerging real-world scenarios; transformative technologies
A critical evaluation of deep-learning based phylogenetic inference programs using simulated datasets
4
Authors: Yixiao Zhu, Yonglin Li, Chuhao Li, Xing-Xing Shen, Xiaofan Zhou. Journal of Genetics and Genomics, 2025, Issue 5, pp. 714-717 (4 pages)
Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology. Many standard phylogenetic methods (such as maximum likelihood [ML]) rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy. Emerging deep learning (DL) techniques offer a powerful alternative. Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations. DL methods can autonomously uncover meaningful patterns from data, thereby bypassing potential biases introduced by predefined features (Franklin, 2005; Murphy, 2012). Recent efforts have aimed to apply deep neural networks (DNNs) to phylogenetics, with a growing number of applications in tree reconstruction (Suvorov et al., 2020; Zou et al., 2020; Nesterenko et al., 2022; Smith and Hahn, 2023; Wang et al., 2023), substitution model selection (Abadi et al., 2020; Burgstaller-Muehlbacher et al., 2023), and diversification rate inference (Voznica et al., 2022; Lajaaiti et al., 2023; Lambert et al., 2023). In phylogenetic tree reconstruction, PhyDL (Zou et al., 2020) and Tree_learning (Suvorov et al., 2020) are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid (AA) and DNA sequences, respectively.
Keywords: phylogenetic inference; explicit models of sequence evolution; deep learning (DL); molecular sequences; simulated datasets; phylogenetic methods; evolutionary biology
A deep learning model for ocean surface latent heat flux based on transformer and data assimilation
5
Authors: Yahui Liu, Hengxiao Li, Jichao Wang. Acta Oceanologica Sinica, 2025, Issue 5, pp. 115-130 (16 pages)
Efficient and accurate prediction of ocean surface latent heat fluxes is essential for understanding and modeling climate dynamics. Conventional estimation methods have low resolution and lack accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of physical processes, the transformer model encounters the problem of error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the data assimilation (DA) technique with the transformer model and continuously modify the model state to make it closer to the actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates transformer, convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that the TransNetDA method surpasses traditional techniques in terms of root mean square error and R2 metrics, showcasing its superior performance in predicting latent heat fluxes at the ocean surface.
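TransNetDA's assimilation component is an ensemble Kalman filter; its corrective effect can be seen in a one-variable Kalman analysis step, where a biased model forecast is repeatedly nudged toward observations so error cannot accumulate. All numbers below are toy assumptions, not the paper's configuration.

```python
def analysis_update(forecast, obs, var_f, var_o):
    """Kalman analysis step: blend forecast and observation weighted by
    their error variances; the gain K favors whichever is more reliable."""
    K = var_f / (var_f + var_o)
    return forecast + K * (obs - forecast), (1.0 - K) * var_f

truth = 10.0                 # constant true flux (toy, noise-free obs)
state, var = 8.0, 4.0        # assimilated run: state and error variance
free_run = 8.0               # model-only run with no correction
for _ in range(20):
    free_run += 0.3                       # biased model drifts away
    state, var = state + 0.3, var + 0.5   # same biased step + process noise
    state, var = analysis_update(state, truth, var, 1.0)
err_da, err_free = abs(state - truth), abs(free_run - truth)
```

The free-running model ends 4.0 off the truth, while the assimilated state stays near it: the analysis step absorbs the per-step bias instead of letting it compound.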
Keywords: climate dynamics; deep learning (DL); data assimilation (DA); transformer; ensemble Kalman filter; ocean surface latent heat flux
Deep Learning Models for Detecting Cheating in Online Exams
6
Authors: Siham Essahraui, Ismail Lamaakal, Yassine Maleh, Khalid El Makkaoui, Mouncef Filali Bouami, Ibrahim Ouahbi, May Almousa, Ali Abdullah S. Al Qahtani, Ahmed A. Abd El-Latif. Computers, Materials & Continua, 2025, Issue 11, pp. 3151-3183 (33 pages)
The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of various deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, ResNet variants, and more, using two specialized datasets (OEP and OP) tailored for online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. For the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.
Keywords: anti-cheating model; computer vision (CV); deep learning (DL); online exam proctoring; neural networks; facial recognition; biometric authentication; security of distance education
Forecasting hourly PM_(2.5) concentrations based on a decomposition-ensemble-reconstruction framework incorporating deep learning algorithms (cited 2 times)
7
Authors: Peilei Cai, Chengyuan Zhang, Jian Chai. Data Science and Management, 2023, Issue 1, pp. 46-54 (9 pages)
Accurate predictions of hourly PM_(2.5) concentrations are crucial for preventing the harmful effects of air pollution. In this study, a new decomposition-ensemble framework incorporating the variational mode decomposition method (VMD), an econometric forecasting method (the autoregressive integrated moving average model, ARIMA), and deep learning techniques (convolutional neural networks (CNN) and temporal convolutional networks (TCN)) was developed to model the data characteristics of hourly PM_(2.5) concentrations. Taking the PM_(2.5) concentrations of Lanzhou, Gansu Province, China as the sample, the empirical results demonstrated that the developed decomposition-ensemble framework is significantly superior to benchmarks based on econometric models, machine learning models, basic deep learning models, and traditional decomposition-ensemble models, for one-, two-, and three-step-ahead forecasts. This study verified the effectiveness of the new framework in capturing the data patterns of PM_(2.5) concentrations; it can be employed as a meaningful PM_(2.5) concentration prediction tool.
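The decompose-forecast-recombine idea behind such frameworks can be sketched in a few lines. This stand-in uses a trailing moving average in place of VMD and naive extrapolation in place of ARIMA/CNN/TCN; the series values are hypothetical.

```python
def moving_average(series, k):
    """Trailing k-point moving average as a crude low-frequency component
    (a stand-in for VMD, which the paper actually uses)."""
    out = []
    for i in range(len(series)):
        window = series[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def decompose_forecast(series, k=3):
    """Split into trend + residual, forecast each separately, recombine.
    ARIMA and the deep models play these per-component roles in the paper."""
    trend = moving_average(series, k)
    resid = [x - t for x, t in zip(series, trend)]
    trend_fc = trend[-1] + (trend[-1] - trend[-2])   # linear trend extrapolation
    resid_fc = sum(resid[-k:]) / k                   # short mean of residual
    return trend_fc + resid_fc

pm25 = [35, 38, 44, 41, 39, 45, 52, 49, 47, 53]      # hypothetical hourly values
pred = decompose_forecast(pm25)
```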
Keywords: PM_(2.5) concentration prediction; decomposition-ensemble-reconstruction framework; variational mode decomposition method; deep learning
Enhancing User Experience in AI-Powered Human-Computer Communication with Vocal Emotions Identification Using a Novel Deep Learning Method
8
Authors: Ahmed Alhussen, Arshiya Sajid Ansari, Mohammad Sajid Mohammadi. Computers, Materials & Continua, 2025, Issue 2, pp. 2909-2929 (21 pages)
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). Specifically, the vocals carry a great deal of knowledge, revealing details about the speaker's goals and desires, as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word study assists the speaker's demand to be understood. Voice emotion recognition has become an essential component of modern HCC networks. Integrating findings from the various disciplines involved in identifying vocal emotions is also challenging. Many sound analysis techniques were developed in the past. With the development of artificial intelligence (AI), and especially deep learning (DL) technology, research incorporating real data is becoming increasingly common. Thus, this research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The RAVDESS public dataset is used to train the suggested SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise and extract features from the data. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. Python was used to implement our proposed framework. In the assessment phase, numerous metrics are used to evaluate the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The suggested approach is tested on a Python platform, and the SHO-LSTM's outcomes are contrasted with those of other previously conducted research. Based on comparative assessments, our suggested approach outperforms the current approaches in vocal emotion recognition.
Keywords: human-computer communication (HCC); vocal emotions; live vocal; artificial intelligence (AI); deep learning (DL); selfish herd optimization-tuned long short-term memory (SHO-LSTM)
Deep Learning in Medical Image Analysis: A Comprehensive Review of Algorithms, Trends, Applications, and Challenges
9
Authors: Dawa Chyophel Lepcha, Bhawna Goyal, Ayush Dogra, Ahmed Alkhayyat, Prabhat Kumar Sahu, Aaliya Ali, Vinay Kukreja. Computer Modeling in Engineering & Sciences, 2025, Issue 11, pp. 1487-1573 (87 pages)
Medical image analysis has become a cornerstone of modern healthcare, driven by the exponential growth of data from imaging modalities such as MRI, CT, PET, ultrasound, and X-ray. Traditional machine learning methods made early contributions; however, recent advancements in deep learning (DL) have revolutionized the field, offering state-of-the-art performance in image classification, segmentation, detection, fusion, registration, and enhancement. This comprehensive review presents an in-depth analysis of deep learning methodologies applied across medical image analysis tasks, highlighting both foundational models and recent innovations. The article begins by introducing conventional techniques and their limitations, setting the stage for DL-based solutions. Core DL architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), vision transformers (ViTs), and hybrid models, are discussed in detail, including their advantages and domain-specific adaptations. Advanced learning paradigms such as semi-supervised learning, self-supervised learning, and few-shot learning are explored for their potential to mitigate data annotation challenges in clinical datasets. The review further categorizes major tasks in medical image analysis, elaborating on how DL techniques have enabled precise tumor segmentation, lesion detection, modality fusion, super-resolution, and robust classification across diverse clinical settings. Emphasis is placed on applications in oncology, cardiology, neurology, and infectious diseases, including COVID-19. Challenges such as data scarcity, label imbalance, model generalizability, interpretability, and integration into clinical workflows are critically examined. Ethical considerations, explainable AI (XAI), federated learning, and regulatory compliance are discussed as essential components of real-world deployment. Benchmark datasets, evaluation metrics, and comparative performance analyses are presented to support future research. The article concludes with a forward-looking perspective on the role of foundation models, multimodal learning, edge AI, and bio-inspired computing in the future of medical imaging. Overall, this review serves as a valuable resource for researchers, clinicians, and developers aiming to harness deep learning for intelligent, efficient, and clinically viable medical image analysis.
Keywords: medical image analysis; deep learning (DL); artificial intelligence (AI); neural networks; convolutional neural networks (CNNs); generative adversarial networks (GANs); transformers; natural language processing (NLP); computational applications; comprehensive analysis
An Enhanced Task Migration Technique Based on Convolutional Neural Network in Machine Learning Framework
10
Authors: Hamayun Khan, Muhammad Atif Imtiaz, Hira Siddique, Muhammad Tausif Afzal Rana, Arshad Ali, Muhammad Zeeshan Baig, Saif ur Rehman, Yazed Alsaawy. Computer Systems Science & Engineering, 2025, Issue 1, pp. 317-331 (15 pages)
The migration of tasks aided by machine learning (ML) predictions in dynamic power management (DPM) is a system-level design technique used to reduce energy by enhancing the overall performance of the processor. In this paper, we address the issue of system-level higher task dissipation during the execution of parallel workloads with common deadlines by introducing a machine learning-based framework that includes task migration using energy-efficient earliest deadline first scheduling (EA-EDF). ML-based EA-EDF enhances the overall throughput and optimizes energy to avoid delay and performance degradation in a multiprocessor system. The proposed system model allocates processors to the ready task set in such a way that their deadlines are guaranteed. A full task migration policy is also integrated to ensure proper task mapping and inter-process linkage among arrived tasks with the same deadlines. The execution of a task can halt on one CPU and be rescheduled on a different processor to avoid delay and meet the deadline. Our approach shows promising potential for machine-learning-based schedulability analysis, enables a comparison between different ML models, and shows a promising reduction in energy compared with other ML-aware task migration techniques for SoC, such as multi-layer feed-forward neural networks (MLFNN) based on convolutional neural networks (CNN), random forest (RF), and deep learning (DL) algorithms. The simulations are conducted using the super-pipelined microarchitecture of the Advanced Micro Devices (AMD) XScale PXA270 with 32 KB instruction and 32 KB data caches per core, at utilization factors (u_(i)) of 12%, 31%, and 50%. The proposed approach consumes 5.3% less energy when almost half of the CPU is utilized and 1.04% less energy at lower workloads. The proposed design accumulatively gives significant improvements by reducing energy dissipation across three clock rates: by 4.41%, by 5.4% at 624 MHz, and by 5.9% for applications operating at the standard 416 and 312 MHz frequencies.
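The EDF core of such a scheme reduces to the classic utilization test, and a first-fit partitioner that "migrates" a task to the next CPU when a processor would become infeasible can be sketched as follows. The task set and the first-fit policy are illustrative assumptions, not the paper's exact EA-EDF.

```python
def edf_feasible(tasks):
    """Classic EDF utilization test for one CPU: sum(C_i / T_i) <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

def assign_with_migration(tasks, n_cpus):
    """First-fit partitioning: place each task on the first CPU that stays
    EDF-feasible; a task that no longer fits moves on to the next CPU."""
    cpus = [[] for _ in range(n_cpus)]
    for task in sorted(tasks, key=lambda ct: ct[0] / ct[1], reverse=True):
        for cpu in cpus:
            if edf_feasible(cpu + [task]):
                cpu.append(task)
                break
        else:
            return None          # unschedulable on this many processors
    return cpus

# (C, T): worst-case execution time and period, hypothetical values
tasks = [(2, 5), (3, 10), (4, 8), (1, 4), (3, 6)]
plan = assign_with_migration(tasks, 2)
```

With total utilization 1.95, the set fits on two CPUs but not on one, which is exactly the boundary the migration policy has to manage.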
Keywords: convolutional neural network (CNN); energy conservation; dynamic thermal management; optimization methods; artificial neural networks (ANN); multiprocessor systems-on-chips; artificial intelligence; multi-layer feed-forward neural network (MLFNN); random forest (RF); deep learning (DL)
Scalable and Resilient AI Framework for Malware Detection in Software-Defined Internet of Things
11
Authors: Maha Abdelhaq, Ahmad Sami Al-Shamayleh, Adnan Akhunzada, Nikola Ivković, Toobah Hasan. Computers, Materials & Continua, 2026, Issue 4, pp. 1307-1321 (15 pages)
The rapid expansion of the Internet of Things (IoT) and Edge Artificial Intelligence (AI) has redefined automation and connectivity across modern networks. However, the heterogeneity and limited resources of IoT devices expose them to increasingly sophisticated and persistent malware attacks. These adaptive and stealthy threats can evade conventional detection, establish remote control, propagate across devices, exfiltrate sensitive data, and compromise network integrity. This study presents a Software-Defined Internet of Things (SD-IoT) control-plane-based, AI-driven framework that integrates Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM) networks for efficient detection of evolving multi-vector, malware-driven botnet attacks. The proposed CUDA-enabled hybrid deep learning (DL) framework performs centralized real-time detection without adding computational overhead to IoT nodes. A feature selection strategy combining variable clustering, attribute evaluation, one-R attribute evaluation, correlation analysis, and principal component analysis (PCA) enhances detection accuracy and reduces complexity. The framework is rigorously evaluated using the N_BaIoT dataset under k-fold cross-validation. Experimental results achieve 99.96% detection accuracy, a false positive rate (FPR) of 0.0035%, and a detection latency of 0.18 ms, confirming its high efficiency and scalability. The findings demonstrate the framework's potential as a robust and intelligent security solution for next-generation IoT ecosystems.
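One ingredient of the feature selection stage, correlation analysis, can be sketched as ranking feature columns by absolute Pearson correlation with the label and keeping the strongest ones. The toy flow records below are assumptions; the paper combines this with clustering, one-R evaluation, and PCA.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation; 0.0 for a constant (zero-variance) column."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def top_k_features(rows, labels, k):
    """Rank feature columns by |correlation| with the label, keep k."""
    n_feat = len(rows[0])
    scores = [(abs(pearson([r[j] for r in rows], labels)), j)
              for j in range(n_feat)]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Toy flow records: column 0 tracks the label, 1 is noise, 2 is constant
rows = [[1, 7, 5], [2, 3, 5], [3, 9, 5], [4, 1, 5], [5, 5, 5]]
labels = [0, 0, 1, 1, 1]
keep = top_k_features(rows, labels, 1)
```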
Keywords: AI-driven malware analysis; advanced persistent malware (APM); AI-powered malware detection; deep learning (DL); malware-driven botnets; software-defined internet of things (SD-IoT)
Recent Progresses in Deep Learning Based Acoustic Models (cited 11 times)
12
Authors: Dong Yu, Jinyu Li. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, Issue 3, pp. 396-409 (14 pages)
In this paper, we summarize recent progress made in deep learning based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end, with an emphasis on feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
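The CTC criterion mentioned above rests on a many-to-one mapping from per-frame network outputs to label sequences: merge consecutive duplicates, then delete blanks. A minimal decoder-side sketch of that standard rule:

```python
BLANK = "_"

def ctc_collapse(frames):
    """Map a per-frame label sequence to an output string: merge consecutive
    duplicates first, then drop blanks (the standard CTC many-to-one rule)."""
    out = []
    prev = None
    for sym in frames:
        if sym != prev and sym != BLANK:
            out.append(sym)
        prev = sym
    return "".join(out)

# A blank between the two l's keeps them from being merged into one
decoded = ctc_collapse(list("hh_e_ll_l_oo"))
```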
Keywords: attention model; convolutional neural network (CNN); connectionist temporal classification (CTC); deep learning (DL); long short-term memory (LSTM); permutation invariant training; speech adaptation; speech processing; speech recognition; speech separation
Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (cited 2 times)
13
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology (CAS, CSCD), 2018, Issue 1, pp. 11-15 (5 pages)
The performance of deep learning (DL) networks has been increased by elaborating network structures. However, DL networks have many parameters, which strongly influence the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the robot object recognition and grasping tasks.
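The GA loop described here, evolving (hidden units, epochs, learning rate) to reduce error and training time, can be sketched with a toy surrogate objective in place of actually training a DBNN. The fitness function, its optimum at (96, 30, 0.01), and the GA settings below are all assumptions for illustration.

```python
import random

random.seed(0)

def fitness(units, epochs, lr):
    """Hypothetical surrogate for 'error rate + training time'; the real
    objective would train and evaluate a DBNN with these hyperparameters."""
    return (units - 96) ** 2 + (epochs - 30) ** 2 + 1000 * (lr - 0.01) ** 2

def mutate(g):
    units, epochs, lr = g
    return (max(8, units + random.randint(-16, 16)),
            max(1, epochs + random.randint(-5, 5)),
            max(1e-4, lr + random.uniform(-0.005, 0.005)))

def ga_search(generations=40, pop_size=20):
    pop = [(random.randint(8, 256), random.randint(1, 100),
            random.uniform(1e-4, 0.1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(*g))
        elite = pop[: pop_size // 4]                  # selection (elitism)
        pop = elite + [mutate(random.choice(elite))   # mutation of elites
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda g: fitness(*g))

best = ga_search()
```

Because the elites survive each generation, the best fitness is non-increasing, and the search settles near the surrogate's optimum without any gradient information.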
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
Identification of paralytic shellfish toxin-producing microalgae using machine learning and deep learning methods (cited 3 times)
14
Authors: Wei XU, Jie NIU, Wenyu GAN, Siyu GOU, Shuai ZHANG, Han QIU, Tianjiu JIANG. Journal of Oceanology and Limnology (SCIE, CAS, CSCD), 2022, Issue 6, pp. 2202-2217 (16 pages)
Paralytic shellfish poisoning (PSP) microalgae, as one of the harmful algal bloom species, cause great damage to offshore fisheries, marine aquaculture, and the marine ecological environment. At present, there is no technique for real-time accurate identification of toxic microalgae. By combining three-dimensional fluorescence with machine learning (ML) and deep learning (DL), we developed methods to classify PSP and non-PSP microalgae. The average classification accuracies of these two methods for microalgae are above 90%, and the accuracies for discriminating 12 microalgae species within the PSP and non-PSP groups are above 94%. When the emission wavelength is 650-690 nm, the fluorescence characteristic bands (excitation wavelength) occur differently, at 410-480 nm and 500-560 nm for PSP and non-PSP microalgae, respectively. The identification accuracies of the ML models (support vector machine (SVM) and k-nearest neighbor rule (k-NN)) and the DL model (convolutional neural network (CNN)) for PSP microalgae are 96.25%, 96.36%, and 95.88%, respectively, indicating that ML and DL are suitable for the classification of toxic microalgae.
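The k-NN rule used as one of the ML models is simple enough to sketch directly. The three-band intensity vectors below are hypothetical stand-ins for real three-dimensional fluorescence features, with the first band loosely standing for 410-480 nm excitation and the second for 500-560 nm.

```python
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+

def knn_predict(train, query, k=3):
    """Plain k-NN: label of the majority among the k nearest spectra."""
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Hypothetical 3-bin excitation-band intensities per species sample
train = [
    ((0.90, 0.20, 0.10), "PSP"),
    ((0.80, 0.30, 0.20), "PSP"),
    ((0.85, 0.25, 0.10), "PSP"),
    ((0.20, 0.90, 0.30), "non-PSP"),
    ((0.10, 0.80, 0.20), "non-PSP"),
    ((0.30, 0.85, 0.25), "non-PSP"),
]
pred = knn_predict(train, (0.82, 0.22, 0.15))
```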
Keywords: paralytic shellfish poisoning (PSP); machine learning (ML); deep learning (DL); toxic algae classification
Deep learning for fast channel estimation in millimeter-wave MIMO systems (cited 3 times)
15
Authors: LYU Siting, LI Xiaohui, FAN Tao, LIU Jiawen, SHI Mingli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2022, Issue 6, pp. 1088-1095 (8 pages)
Channel estimation has been considered a key issue in millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL) based fast channel estimation method for mmWave massive MIMO systems. The proposed method can directly and effectively estimate channel state information (CSI) from received data without performing pilot signal estimation in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN) based channel estimation network for the case of dimensional mismatch between input and output data, subsequently denoted as the channel (H) neural network (HNN). It can quickly estimate the channel information by learning the inherent characteristics of the received data and the relationship between the received data and the channel, while the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
Keywords: millimeter-wave (mmWave); channel estimation; deep learning (DL); dimensional mismatch; channel state information (CSI)
RFFsNet-SEI: a multidimensional balanced-RFFs deep neural network framework for specific emitter identification (cited 3 times)
16
Authors: FAN Rong, SI Chengke, HAN Yi, WAN Qun. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, Issue 3, pp. 558-574, F0002 (18 pages)
Existing specific emitter identification (SEI) methods based on hand-crafted features have the drawbacks of losing feature information and involving multiple processing stages, which reduce the identification accuracy of emitters and complicate the identification procedure. In this paper, we propose a deep SEI approach via multidimensional feature extraction for radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by virtue of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and I-Q data are formed into balanced-RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme including physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from received raw data end-to-end, it accelerates SEI implementation and simplifies the identification procedure. Moreover, as both the temporal and spectral features of the received signal are extracted by RFFsNet-SEI, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
Keywords: specific emitter identification (SEI), deep learning (DL), radio frequency fingerprint (RFF), multidimensional feature extraction (MFE), variational mode decomposition (VMD)
Online reading | Download PDF
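The multidimensional feature extraction described in the abstract combines VMD with the Hilbert transform. As a minimal sketch (not the authors' implementation), the code below computes Hilbert-transform-based instantaneous features of a received frame in plain NumPy; the VMD stage is omitted, and the `rff_features` helper name is an assumption for illustration.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based Hilbert transform: keep DC (and Nyquist), double the
    # positive frequencies, zero out the negative frequencies.
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def rff_features(x, fs):
    """Instantaneous amplitude, phase, and frequency of one received frame."""
    z = analytic_signal(x)
    amp = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
    return amp, phase, inst_freq

# Toy frame: a 50 Hz tone sampled at 1 kHz (50 falls exactly on an FFT bin).
fs = 1000.0
t = np.arange(1000) / fs
frame = np.cos(2.0 * np.pi * 50.0 * t)
amp, phase, inst_freq = rff_features(frame, fs)
print(amp.shape, inst_freq.shape)              # (1000,) (999,)
print(round(float(np.median(inst_freq)), 1))   # 50.0
```

In the paper's hybrid-driven scheme, features of this kind are stacked with the raw I-Q samples before being fed to the network.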
Deep learning-based time-varying channel estimation with basis expansion model for MIMO-OFDM system (Cited by: 2)
17
Authors: HU Bo, YANG Lihua, REN Lulu, NIE Qian. High Technology Letters, EI, CAS, 2022, Issue 3, pp. 288-294 (7 pages)
For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL) based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into estimation of the basis coefficients. Specifically, the initial basis coefficients are first used to train the neural network offline, after which high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) channel estimate is used in the loss function during training, which makes the proposed method more practical. Simulation results show that the proposed method achieves better performance and lower computational complexity than the available schemes, and that it is robust to fast time-varying channels in high-speed mobile scenarios.
Keywords: MIMO-OFDM, high-speed mobile, time-varying channel, deep learning (DL), basis expansion model (BEM)
Online reading | Download PDF
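The abstract's key idea, modeling the time-varying channel with a basis expansion so that only a few coefficients need to be estimated, can be sketched independently of the DL stage. The complex-exponential basis, block length, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q = 64, 5  # samples per block, number of basis functions (illustrative)

# Complex-exponential BEM: sample n of basis function q is
# exp(j*2*pi*(q - Q//2)*n / N); the columns are orthogonal over the block.
n = np.arange(N)[:, None]
q = np.arange(Q)[None, :] - Q // 2
B = np.exp(2j * np.pi * q * n / N)

# One time-varying channel tap generated from the model, observed in noise.
c_true = (rng.standard_normal(Q) + 1j * rng.standard_normal(Q)) / np.sqrt(2 * Q)
h_true = B @ c_true
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
h_obs = h_true + noise

# Least squares over the basis: only Q unknowns instead of N channel samples.
c_hat, *_ = np.linalg.lstsq(B, h_obs, rcond=None)
h_hat = B @ c_hat
nmse = float(np.linalg.norm(h_hat - h_true) ** 2 / np.linalg.norm(h_true) ** 2)
print(c_hat.shape, nmse)
```

In the paper, a neural network replaces this plain least-squares step and refines the coefficient estimates; the parameter reduction from N samples to Q coefficients is what keeps the scheme low-complexity.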
A Hierarchy Distributed-Agents Model for Network Risk Evaluation Based on Deep Learning (Cited by: 1)
18
Authors: Jin Yang, Tao Li, Gang Liang, Wenbo He, Yue Zhao. Computer Modeling in Engineering & Sciences, SCIE, EI, 2019, Issue 7, pp. 1-23 (23 pages)
Deep learning offers a critical capability for adapting to constantly changing environments and ongoing learning dynamics, which is especially relevant in network intrusion detection. In this paper, inspired by the theory of deep neural networks, a newly developed model, the Hierarchy Distributed-Agents Model for Network Risk Evaluation, is proposed. The architecture of the distributed-agents model is given, along with the approach of analyzing network intrusion detection using deep learning; the mechanism of sharing hyper-parameters to improve learning efficiency is presented, and the hierarchical evaluation framework for network risk evaluation is built. Furthermore, to examine the proposed model, a series of experiments was conducted on the NSL-KDD dataset. The proposed model differentiated between normal and abnormal network activities with an accuracy of 97.60% on the NSL-KDD dataset. The experimental results indicate that the model is characterized by high-speed, high-accuracy processing and offers a preferable solution for network risk evaluation.
Keywords: network security, deep learning (DL), intrusion detection system (IDS), distributed agents
Online reading | Download PDF
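The hierarchy of distributed agents with shared hyper-parameters can be illustrated, in a much-simplified form, as leaf detection agents reporting local risk to an aggregating parent. The class names, the mean-based aggregation, and the shared threshold are illustrative assumptions; the paper's agents run deep-learning detectors rather than fixed thresholds.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DetectionAgent:
    """Leaf agent: scores local traffic in [0, 1] against a shared threshold."""
    name: str
    threshold: float = 0.5  # shared hyper-parameter pushed down from the parent

    def evaluate(self, scores):
        alerts = [s for s in scores if s >= self.threshold]
        return mean(scores), len(alerts)

@dataclass
class RegionAgent:
    """Mid-level agent: averages child risks into one regional risk value."""
    children: list = field(default_factory=list)

    def evaluate(self, traffic):
        return mean(child.evaluate(traffic[child.name])[0]
                    for child in self.children)

# Two leaf agents reporting to one regional aggregator.
edge1, edge2 = DetectionAgent("edge-1"), DetectionAgent("edge-2")
region = RegionAgent(children=[edge1, edge2])
traffic = {"edge-1": [0.1, 0.2, 0.9], "edge-2": [0.05, 0.1]}
risk = region.evaluate(traffic)
print(risk)  # mean of the two leaf risks, 0.4 and 0.075
```

Stacking further `RegionAgent` levels yields the hierarchical evaluation the abstract describes, with hyper-parameters shared downward and risk values aggregated upward.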
A Deep Learning-Based Continuous Blood Pressure Measurement by Dual Photoplethysmography Signals (Cited by: 1)
19
Authors: Chih-Ta Yen, Sheng-Nan Chang, Liao Jia-Xian, Yi-Kai Huang. Computers, Materials & Continua, SCIE, EI, 2022, Issue 2, pp. 2937-2952 (16 pages)
This study proposed a measurement platform for continuous blood pressure estimation based on dual photoplethysmography (PPG) sensors and deep learning (DL) that can be used for continuous, rapid measurement of blood pressure and analysis of cardiovascular-related indicators. The proposed platform measured signal changes in the PPG and converted them into physiological indicators such as pulse transit time (PTT), pulse wave velocity (PWV), perfusion index (PI), and heart rate (HR); these indicators were then fed into the DL model to calculate blood pressure. The hardware comprised two PPG components, a Raspberry Pi 3 Model B, and an analog-to-digital converter (MCP3008) connected over a serial peripheral interface. The DL algorithm converted the stable dual PPG signals, acquired through a strictly standardized experimental process, into the physiological indicators used as input parameters and finally obtained the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). To increase the robustness of the DL model, data from 100 Asian participants, with and without cardiovascular disease in roughly equal proportion, were included in the training database. The experimental results revealed a mean absolute error and standard deviation of 0.17 ± 0.46 mmHg for SBP, 0.27 ± 0.52 mmHg for DBP, and 0.16 ± 0.40 mmHg for MAP.
Keywords: deep learning (DL), blood pressure, continuous non-invasive blood pressure measurement, photoplethysmography (PPG)
Online reading | Download PDF
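The PTT, PWV, and HR indicators the platform derives from the dual PPG signals can be sketched as follows. The sampling rate, sensor spacing, synthetic waveform, and the naive peak picker are all illustrative assumptions, not the authors' settings.

```python
import numpy as np

def simple_peaks(x, min_height):
    """Indices of strict local maxima above min_height (naive peak picker)."""
    mid = x[1:-1]
    return np.where((mid > x[:-2]) & (mid > x[2:]) & (mid > min_height))[0] + 1

fs = 250.0       # sampling rate in Hz (assumed)
gap_m = 0.5      # distance between the two PPG sites in metres (assumed)
delay_s = 0.04   # simulated proximal-to-distal transit delay

# Synthetic pulse trains at 72 beats per minute; the distal channel is a
# delayed copy of the proximal one.
t = np.arange(0, 10, 1 / fs)
beat_hz = 1.2
ppg_prox = np.maximum(np.sin(2 * np.pi * beat_hz * t), 0.0) ** 3
ppg_dist = np.maximum(np.sin(2 * np.pi * beat_hz * (t - delay_s)), 0.0) ** 3

p1 = simple_peaks(ppg_prox, 0.5)
p2 = simple_peaks(ppg_dist, 0.5)
m = min(len(p1), len(p2))
ptt = float(np.mean(p2[:m] - p1[:m])) / fs     # pulse transit time (s)
pwv = gap_m / ptt                              # pulse wave velocity (m/s)
hr = 60.0 * fs / float(np.mean(np.diff(p1)))   # heart rate (beats per minute)
print(round(ptt, 3), round(pwv, 1), round(hr, 1))
```

On real PPG data the peak detection needs band-pass filtering and artifact rejection first; in the paper these indicators are inputs to the DL model rather than direct blood pressure estimates.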
Micro-mechanical damage diagnosis methodologies based on machine learning and deep learning models
20
Authors: Shahab SHAMSIRBAND, Nabi MEHRI KHANSARI. Journal of Zhejiang University-Science A (Applied Physics & Engineering), SCIE, EI, CAS, CSCD, 2021, Issue 8, pp. 585-608 (24 pages)
A loss of integrity and the effects of damage on mechanical attributes result in macro/micro-mechanical failure, especially in composite structures. Because damage is a progressive degradation of material continuity, predictions of any aspect of its initiation and propagation need to be made by a trustworthy mechanism to guarantee the safety of structures. Besides material design, structural integrity and health need to be monitored carefully. Among the most powerful methods for damage detection are machine learning (ML) and deep learning (DL). In this paper, we review state-of-the-art ML methods and their applications in detecting and predicting material damage, concentrating on composite materials. The most influential ML methods are identified based on their performance, and research gaps and future trends are discussed. Based on our findings, DL, followed by ensemble-based techniques, has the widest application and greatest robustness in the field of damage diagnosis.
Keywords: damage detection, machine learning (ML), composite structure, micro-mechanics of damage, deep learning (DL)
Full-text delivery