Journal Articles
398 articles found
1. Spectrum Prediction Based on GAN and Deep Transfer Learning: A Cross-Band Data Augmentation Framework (Cited by 7)
Authors: Fandi Lin, Jin Chen, Guoru Ding, Yutao Jiao, Jiachen Sun, Haichao Wang. China Communications (SCIE, CSCD), 2021, No. 1, pp. 18-32 (15 pages)
This paper investigates the problem of data scarcity in spectrum prediction. Cognitive radio equipment may frequently switch the target frequency as the electromagnetic environment changes, and a previously trained prediction model often cannot maintain good performance when only a small amount of historical data is available for the new target frequency. Moreover, cognitive radio equipment usually implements dynamic spectrum access in real time, which means the time available to recollect data for the new frequency band and retrain the model is very limited. To address these issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GANs) and deep transfer learning. First, through a similarity measurement, we pre-train a GAN model using the historical data of the frequency band most similar to the target band. Then, after augmenting the small amount of target data with the pre-trained GAN, a temporal-spectral residual network is further trained via deep transfer learning on the generated high-similarity data. Finally, experimental results demonstrate the effectiveness of the proposed framework.
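The band-selection step described in this abstract can be sketched as follows. The Pearson-correlation criterion and the synthetic occupancy traces below are illustrative assumptions, not the paper's actual similarity measure or data.

```python
import numpy as np

def most_similar_band(target_history, candidate_histories):
    """Return the candidate band whose history correlates most
    strongly with the (short) target-band history.

    target_history: 1-D array of recent measurements on the new band.
    candidate_histories: dict {band_id: 1-D array}; each is truncated
    to the target length before comparison.
    """
    n = len(target_history)
    best_band, best_corr = None, -np.inf
    for band, hist in candidate_histories.items():
        r = np.corrcoef(target_history, hist[:n])[0, 1]
        if r > best_corr:
            best_band, best_corr = band, r
    return best_band, best_corr

t = np.linspace(0, 4 * np.pi, 64)
target = np.sin(t)                  # short history on the new band
candidates = {
    "band_A": np.sin(t + 0.1),      # nearly in phase -> most similar
    "band_B": np.cos(2 * t),        # different dynamics
}
band, corr = most_similar_band(target, candidates)
```

The selected band's full history would then pre-train the GAN before fine-tuning on the scarce target data.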
Keywords: cognitive radio; cross-band spectrum prediction; deep transfer learning; generative adversarial network; cross-band data augmentation framework
2. Editorial: Special Issue on the Next-Generation Deep Learning Approaches to Emerging Real-World Applications
Authors: Yu Zhou, Eneko Osaba, Xiao Zhang. Computers, Materials & Continua, 2025, No. 7, pp. 237-242 (6 pages)
Deep learning (DL), one of the most transformative technologies in artificial intelligence (AI), is undergoing a pivotal transition from laboratory research to industrial deployment. Advancing at an unprecedented pace, DL is transcending theoretical and application boundaries to penetrate emerging real-world scenarios such as industrial automation, urban management, and health monitoring, thereby driving a new wave of intelligent transformation. In August 2023, Goldman Sachs estimated that global AI investment would reach US$200 billion by 2025 [1]. However, the increasing complexity and dynamic nature of application scenarios expose critical challenges for traditional deep learning, including data heterogeneity, insufficient model generalization, computational resource constraints, and privacy-security trade-offs. The next generation of deep learning methodologies needs breakthroughs in multimodal fusion, lightweight design, interpretability enhancement, and cross-disciplinary collaborative optimization in order to deliver more efficient, robust, and practically valuable intelligent systems.
Keywords: deep learning (DL); artificial intelligence (AI); industrial deployment; intelligent transformation; health monitoring; emerging real-world scenarios; transformative technologies
3. A critical evaluation of deep-learning based phylogenetic inference programs using simulated datasets
Authors: Yixiao Zhu, Yonglin Li, Chuhao Li, Xing-Xing Shen, Xiaofan Zhou. Journal of Genetics and Genomics, 2025, No. 5, pp. 714-717 (4 pages)
Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology. Many standard phylogenetic methods (such as maximum likelihood [ML]) rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy. Emerging deep learning (DL) techniques offer a powerful alternative. Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations. DL methods can autonomously uncover meaningful patterns from data, thereby bypassing potential biases introduced by predefined features (Franklin, 2005; Murphy, 2012). Recent efforts have aimed to apply deep neural networks (DNNs) to phylogenetics, with a growing number of applications in tree reconstruction (Suvorov et al., 2020; Zou et al., 2020; Nesterenko et al., 2022; Smith and Hahn, 2023; Wang et al., 2023), substitution model selection (Abadi et al., 2020; Burgstaller-Muehlbacher et al., 2023), and diversification rate inference (Voznica et al., 2022; Lajaaiti et al., 2023; Lambert et al., 2023). In phylogenetic tree reconstruction, PhyDL (Zou et al., 2020) and Tree_learning (Suvorov et al., 2020) are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid (AA) and DNA sequences, respectively.
Keywords: phylogenetic inference; explicit models; sequence evolution; deep learning (DL); molecular sequences; simulated datasets; phylogenetic methods; evolutionary biology
4. A deep learning model for ocean surface latent heat flux based on transformer and data assimilation
Authors: Yahui Liu, Hengxiao Li, Jichao Wang. Acta Oceanologica Sinica, 2025, No. 5, pp. 115-130 (16 pages)
Efficient and accurate prediction of ocean surface latent heat fluxes is essential for understanding and modeling climate dynamics. Conventional estimation methods have low resolution and lack accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of the physical processes, the transformer model encounters error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the data assimilation (DA) technique with the transformer model and continuously correct the model state to bring it closer to actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates a transformer, a convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that TransNetDA surpasses traditional techniques in terms of root mean square error and R² metrics, showcasing its superior performance in predicting latent heat fluxes at the ocean surface.
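The assimilation component can be illustrated with a single stochastic ensemble Kalman filter (EnKF) analysis step on a scalar observed state. The Gaussian toy forecast and observation values are assumptions for illustration; TransNetDA's actual assimilation setup may differ.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, rng):
    """One stochastic EnKF analysis step for a scalar observed state.

    ensemble: (N,) forecast ensemble of the observed quantity.
    obs: scalar observation; obs_var: its error variance.
    """
    n = ensemble.size
    pf = np.var(ensemble, ddof=1)            # forecast error variance
    k = pf / (pf + obs_var)                  # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return ensemble + k * (perturbed - ensemble)

rng = np.random.default_rng(0)
forecast = rng.normal(5.0, 2.0, 500)   # biased model forecast; truth is 3.0
analysis = enkf_update(forecast, 3.0, 0.5, rng)
```

After each prediction step the analysis ensemble replaces the forecast, pulling the model state back toward the observations and limiting error accumulation.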
Keywords: climate dynamics; deep learning (DL); data assimilation (DA); transformer; ensemble Kalman filter; ocean surface latent heat flux
5. Deep Learning Models for Detecting Cheating in Online Exams
Authors: Siham Essahraui, Ismail Lamaakal, Yassine Maleh, Khalid El Makkaoui, Mouncef Filali Bouami, Ibrahim Ouahbi, May Almousa, Ali Abdullah S. Al Qahtani, Ahmed A. Abd El-Latif. Computers, Materials & Continua, 2025, No. 11, pp. 3151-3183 (33 pages)
The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of various deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, ResNet variants, and more, using two specialized datasets (OEP and OP) tailored for online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. For the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.
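The mAP@0.5 figure reported for YOLOv5 rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal sketch with hypothetical box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At mAP@0.5 a detection counts as a true positive only if IoU >= 0.5.
pred, truth = (10, 10, 50, 50), (20, 20, 60, 60)
overlap = iou(pred, truth)   # 900 / 2300, below the 0.5 threshold
```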
Keywords: anti-cheating model; computer vision (CV); deep learning (DL); online exam proctoring; neural networks; facial recognition; biometric authentication; security of distance education
6. Enhancing User Experience in AI-Powered Human-Computer Communication with Vocal Emotions Identification Using a Novel Deep Learning Method
Authors: Ahmed Alhussen, Arshiya Sajid Ansari, Mohammad Sajid Mohammadi. Computers, Materials & Continua, 2025, No. 2, pp. 2909-2929 (21 pages)
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demand to be understood. Voice emotion recognition has thus become an essential component of modern HCC networks, although integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques were developed in the past; with the development of artificial intelligence (AI), and especially deep learning (DL) technology, research incorporating real data is becoming increasingly common. This research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the proposed SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise and extract features from the data. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. The proposed framework was implemented in Python. In the assessment phase, numerous metrics are used to evaluate the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The SHO-LSTM's outcomes are contrasted with those of previously conducted research; based on these comparative assessments, our approach outperforms current approaches in vocal emotion recognition.
Keywords: human-computer communication (HCC); vocal emotions; live vocal; artificial intelligence (AI); deep learning (DL); selfish herd optimization-tuned long short-term memory (SHO-LSTM)
7. Forecasting hourly PM_(2.5) concentrations based on a decomposition-ensemble-reconstruction framework incorporating deep learning algorithms
Authors: Peilei Cai, Chengyuan Zhang, Jian Chai. Data Science and Management, 2023, No. 1, pp. 46-54 (9 pages)
Accurate predictions of hourly PM_(2.5) concentrations are crucial for preventing the harmful effects of air pollution. In this study, a new decomposition-ensemble framework incorporating the variational mode decomposition method (VMD), an econometric forecasting method (the autoregressive integrated moving average model, ARIMA), and deep learning techniques (convolutional neural networks (CNN) and the temporal convolutional network (TCN)) was developed to model the data characteristics of hourly PM_(2.5) concentrations. Taking the PM_(2.5) concentration of Lanzhou, Gansu Province, China as the sample, the empirical results demonstrate that the developed decomposition-ensemble framework is significantly superior to benchmarks built on econometric, machine learning, basic deep learning, and traditional decomposition-ensemble models for one-, two-, and three-step-ahead forecasts. This study verified the effectiveness of the new framework in capturing the data patterns of PM_(2.5) concentrations; it can be employed as a meaningful PM_(2.5) prediction tool.
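The decomposition-ensemble idea (decompose the series, forecast each component, recombine) can be sketched with stand-ins: a moving average replaces VMD, and a naive persistence forecast replaces the per-component ARIMA/CNN/TCN models, on synthetic hourly data.

```python
import numpy as np

def decompose(series, window=5):
    """Split a series into a smooth trend and a residual component.
    (A moving average stands in for VMD in this sketch.)"""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return trend, series - trend

def forecast_component(comp, steps=1):
    """Naive persistence forecast, a stand-in for the per-component
    ARIMA / CNN / TCN models of the actual framework."""
    return np.full(steps, comp[-1])

hours = np.arange(200)
pm25 = 50 + 10 * np.sin(2 * np.pi * hours / 24)   # synthetic hourly PM2.5
trend, resid = decompose(pm25)
# Ensemble step: forecast each component, then recombine (reconstruct).
pred = forecast_component(trend) + forecast_component(resid)
```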
Keywords: PM_(2.5) concentration prediction; decomposition-ensemble-reconstruction framework; variational mode decomposition method; deep learning
8. Recent Progresses in Deep Learning Based Acoustic Models (Cited by 11)
Authors: Dong Yu, Jinyu Li. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, No. 3, pp. 396-409 (14 pages)
In this paper, we summarize recent progress made in deep learning based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end and emphasize feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
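The CTC criterion mentioned here maps frame-level label paths to output sequences by merging repeated symbols and then removing blanks; the collapse rule itself is simple to sketch:

```python
def ctc_collapse(path, blank="-"):
    """Map a frame-level CTC path to an output label sequence:
    merge repeated symbols, then delete blanks."""
    out = []
    prev = None
    for sym in path:
        if sym != prev and sym != blank:
            out.append(sym)
        prev = sym
    return "".join(out)

# Many different frame alignments collapse to the same transcription,
# which is why CTC training sums over all of them.
a = ctc_collapse("hh-e-ll-lo-")
b = ctc_collapse("-h-ee-l-l-oo")
```

Note that a blank between two identical labels (as in `"l-l"`) is what allows doubled letters to survive the merge.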
Keywords: attention model; convolutional neural network (CNN); connectionist temporal classification (CTC); deep learning (DL); long short-term memory (LSTM); permutation invariant training; speech adaptation; speech processing; speech recognition; speech separation
9. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (Cited by 2)
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology (CAS, CSCD), 2018, No. 1, pp. 11-15 (5 pages)
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate the effectiveness of our method on the robot object recognition and grasping tasks.
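A minimal real-coded genetic algorithm of the kind used here can be sketched with a toy error surface standing in for validation error over (learning rate, epochs); the operators and toy objective are illustrative assumptions, not the paper's exact GA.

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimal real-coded GA: keep the fitter half as elites, create
    children by averaging two elites (crossover) plus Gaussian
    mutation of one clamped gene. `bounds` is [(lo, hi), ...]."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]     # crossover
            i = rng.randrange(len(child))                     # mutation
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Hypothetical stand-in for validation error over (learning rate, epochs).
def toy_error(params):
    lr, epochs = params
    return (lr - 0.01) ** 2 + (epochs - 50) ** 2 / 1e4

best = genetic_search(toy_error, bounds=[(0.0001, 0.1), (10, 100)])
```

In the paper's setting, `toy_error` would be replaced by the DBNN's recognition error after training with the candidate parameters.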
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
10. Identification of paralytic shellfish toxin-producing microalgae using machine learning and deep learning methods (Cited by 3)
Authors: Wei Xu, Jie Niu, Wenyu Gan, Siyu Gou, Shuai Zhang, Han Qiu, Tianjiu Jiang. Journal of Oceanology and Limnology (SCIE, CAS, CSCD), 2022, No. 6, pp. 2202-2217 (16 pages)
Paralytic shellfish poisoning (PSP) microalgae, as one of the harmful algal blooms, cause great damage to the offshore fishery, marine culture, and the marine ecological environment. At present, there is no technique for real-time, accurate identification of toxic microalgae. By combining three-dimensional fluorescence with machine learning (ML) and deep learning (DL), we developed methods to classify PSP and non-PSP microalgae. The average classification accuracies of these two methods for microalgae are above 90%, and the accuracies for discriminating 12 microalgae species within the PSP and non-PSP groups are above 94%. When the emission wavelength is 650-690 nm, the fluorescence characteristic bands (excitation wavelength) occur differently at 410-480 nm and 500-560 nm for PSP and non-PSP microalgae, respectively. The identification accuracies of the ML models (support vector machine (SVM) and k-nearest neighbor rule (k-NN)) and the DL model (convolutional neural network (CNN)) for PSP microalgae are 96.25%, 96.36%, and 95.88%, respectively, indicating that ML and DL are suitable for the classification of toxic microalgae.
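The k-NN classifier mentioned in this abstract can be sketched on toy two-band fluorescence intensities; the feature values and the two-band reduction below are illustrative assumptions, not the study's actual feature set.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples (Euclidean distance)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical excitation-band intensities (410-480 nm, 500-560 nm)
# standing in for the full three-dimensional fluorescence spectra.
train_x = np.array([[0.9, 0.2], [0.8, 0.3], [0.85, 0.25],   # "PSP"
                    [0.2, 0.9], [0.3, 0.8], [0.25, 0.85]])  # "non-PSP"
train_y = ["PSP"] * 3 + ["non-PSP"] * 3
label = knn_predict(train_x, train_y, np.array([0.88, 0.22]))
```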
Keywords: paralytic shellfish poisoning (PSP); machine learning (ML); deep learning (DL); toxic algal classification
11. Deep learning for fast channel estimation in millimeter-wave MIMO systems (Cited by 3)
Authors: Lyu Siting, Li Xiaohui, Fan Tao, Liu Jiawen, Shi Mingli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2022, No. 6, pp. 1088-1095 (8 pages)
Channel estimation has been considered a key issue in millimeter-wave (mmWave) massive multi-input multi-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL) based fast channel estimation method for mmWave massive MIMO systems. The proposed method can directly and effectively estimate channel state information (CSI) from received data without performing pilot-signal estimation in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN) based channel estimation network for the case of dimensional mismatch between input and output data, denoted as the channel (H) neural network (HNN). It can quickly estimate the channel by learning the inherent characteristics of the received data and the relationship between the received data and the channel, even though the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
Keywords: millimeter-wave (mmWave); channel estimation; deep learning (DL); dimensional mismatch; channel state information (CSI)
12. Deep learning-based time-varying channel estimation with basis expansion model for MIMO-OFDM system (Cited by 2)
Authors: Hu Bo, Yang Lihua, Ren Lulu, Nie Qian. High Technology Letters (EI, CAS), 2022, No. 3, pp. 288-294 (7 pages)
For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL) based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into estimation of the basis coefficients. Specifically, the initial basis coefficients are first used to train the neural network in an offline manner, after which high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) estimated channel is used for the loss function in the training phase, which makes the proposed method more practical. Simulation results show that the proposed method achieves better performance and lower computational complexity than the available schemes, and that it is robust to fast time-varying channels in high-speed mobile scenarios.
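The BEM's parameter reduction can be sketched with a complex-exponential (CE) basis and a least-squares coefficient fit: a tap varying over N samples is described by just 2q+1 coefficients. The synthetic, noise-free tap and the CE basis choice are illustrative assumptions.

```python
import numpy as np

def ce_bem_basis(n_samples, q):
    """Complex-exponential BEM basis: columns exp(j*2*pi*m*n/N) for
    m = -q..q, so a time-varying tap is modeled as h[n] ~ (B @ c)[n]."""
    n = np.arange(n_samples)[:, None]
    m = np.arange(-q, q + 1)[None, :]
    return np.exp(1j * 2 * np.pi * n * m / n_samples)

N, q = 64, 2
B = ce_bem_basis(N, q)                          # 64 x 5 basis matrix
true_c = np.array([0.1, 0.5, 1.0, 0.3, 0.05])   # 5 basis coefficients
h = B @ true_c                                  # 64-sample channel tap
c_hat = np.linalg.lstsq(B, h, rcond=None)[0]    # estimate 5 values, not 64
h_hat = B @ c_hat                               # reconstructed tap
```

In the scheme above, the network would be trained to refine such coefficients rather than the full-length channel.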
Keywords: MIMO-OFDM; high-speed mobile; time-varying channel; deep learning (DL); basis expansion model (BEM)
13. A Hierarchy Distributed-Agents Model for Network Risk Evaluation Based on Deep Learning (Cited by 1)
Authors: Jin Yang, Tao Li, Gang Liang, Wenbo He, Yue Zhao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2019, No. 7, pp. 1-23 (23 pages)
Deep learning offers a critical capability to adapt to constantly changing environments and to keep learning dynamically, which is especially relevant to network intrusion detection. In this paper, informed by the theory of deep learning neural networks, we propose a newly developed Hierarchy Distributed-Agents Model for Network Risk Evaluation. We present the architecture of the distributed-agents model, the approach to analyzing network intrusion detection with deep learning, and the mechanism of sharing hyperparameters to improve learning efficiency, and we build the hierarchical evaluative framework for network risk evaluation of the proposed model. Furthermore, to examine the proposed model, a series of experiments was conducted on the NSL-KDD datasets. The proposed model differentiated between normal and abnormal network activities with an accuracy of 97.60% on the NSL-KDD datasets. As the experimental results indicate, the model developed in this paper is characterized by high-speed, high-accuracy processing and offers a preferable solution for network risk evaluation.
Keywords: network security; deep learning (DL); intrusion detection system (IDS); distributed agents
14. RFFsNet-SEI: a multidimensional balanced-RFFs deep neural network framework for specific emitter identification (Cited by 2)
Authors: Fan Rong, Si Chengke, Han Yi, Wan Qun. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, No. 3, pp. 558-574 (18 pages)
Existing specific emitter identification (SEI) methods based on hand-crafted features have the drawbacks of losing feature information and involving multiple processing stages, which reduce the identification accuracy of emitters and complicate the identification procedure. In this paper, we propose a deep SEI approach via multidimensional feature extraction for radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by virtue of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and I-Q data are formed into balanced-RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme including physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from raw received data end-to-end, it accelerates SEI implementation and simplifies the identification procedure. Moreover, as both the temporal and spectral features of the received signal are extracted by RFFsNet-SEI, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
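The Hilbert-transform step used for physical RFF extraction yields instantaneous amplitude and frequency from the analytic signal; a sketch on a synthetic amplitude-modulated carrier (all signal parameters below are assumptions for illustration):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(x, fs):
    """Instantaneous amplitude and frequency of a real signal via the
    analytic signal (Hilbert transform)."""
    analytic = hilbert(x)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, one sample shorter
    return amplitude, freq

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# 50 Hz carrier with a slow 2 Hz amplitude ripple (a toy hardware RFF).
x = (1 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)
amp, freq = instantaneous_features(x, fs)
```

In RFFsNet-SEI such envelope and frequency traces (per VMD mode) would join the raw I-Q data as the balanced-RFFs input.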
Keywords: specific emitter identification (SEI); deep learning (DL); radio frequency fingerprint (RFF); multidimensional feature extraction (MFE); variational mode decomposition (VMD)
15. A Deep Learning-Based Continuous Blood Pressure Measurement by Dual Photoplethysmography Signals (Cited by 1)
Authors: Chih-Ta Yen, Sheng-Nan Chang, Liao Jia-Xian, Yi-Kai Huang. Computers, Materials & Continua (SCIE, EI), 2022, No. 2, pp. 2937-2952 (16 pages)
This study proposed a measurement platform for continuous blood pressure estimation based on dual photoplethysmography (PPG) sensors and a deep learning (DL) model that can be used for continuous, rapid measurement of blood pressure and analysis of cardiovascular-related indicators. The platform measured signal changes in PPG and converted them into physiological indicators such as pulse transit time (PTT), pulse wave velocity (PWV), perfusion index (PI), and heart rate (HR); these indicators were then fed into the DL model to calculate blood pressure. The experimental hardware comprised two PPG components, a Raspberry Pi 3 Model B, and an analog-to-digital converter (MCP3008), connected via a serial peripheral interface. The DL algorithm converted the stable dual PPG signals, acquired through a strictly standardized experimental process, into various physiological indicators as input parameters and finally obtained the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). To increase the robustness of the DL model, this study built a training database from 100 Asian participants, approximately half with and half without cardiovascular disease. The experimental results revealed mean absolute errors and standard deviations of 0.17 ± 0.46 mmHg for SBP, 0.27 ± 0.52 mmHg for DBP, and 0.16 ± 0.40 mmHg for MAP.
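Pulse transit time, one of the indicators fed to the DL model, is the delay between corresponding pulse-wave features at the two PPG sites; a sketch with synthetic Gaussian-shaped beats (the sampling rate, beat shape, and 60 ms delay are assumptions, not the study's data):

```python
import numpy as np

def pulse_transit_time(ppg_proximal, ppg_distal, fs):
    """Estimate PTT (seconds) as the time between corresponding
    systolic peaks of two simultaneously sampled PPG beats."""
    p1 = int(np.argmax(ppg_proximal))
    p2 = int(np.argmax(ppg_distal))
    return (p2 - p1) / fs

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
beat = lambda center: np.exp(-((t - center) ** 2) / (2 * 0.02 ** 2))
prox = beat(0.40)                            # systolic peak at 400 ms
dist = beat(0.46)                            # same beat, 60 ms later
ptt = pulse_transit_time(prox, dist, fs)     # -> 0.06 s
```

PWV then follows as sensor-to-sensor path length divided by PTT, and both join PI and HR as inputs to the DL model.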
Keywords: deep learning (DL); blood pressure; continuous non-invasive blood pressure measurement; photoplethysmography (PPG)
16. Micro-mechanical damage diagnosis methodologies based on machine learning and deep learning models
Authors: Shahab Shamsirband, Nabi Mehri Khansari. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2021, No. 8, pp. 585-608 (24 pages)
A loss of integrity and the effects of damage on mechanical attributes result in macro/micro-mechanical failure, especially in composite structures. Because damage is a progressive degradation of material continuity, any aspect of its initiation and propagation needs to be predicted by a trustworthy mechanism to guarantee the safety of structures. Besides material design, structural integrity and health need to be monitored carefully. Among the most powerful methods for the detection of damage are machine learning (ML) and deep learning (DL). In this paper, we review state-of-the-art ML methods and their applications in detecting and predicting material damage, concentrating on composite materials. The more influential ML methods are identified based on their performance, and research gaps and future trends are discussed. Based on our findings, DL followed by ensemble-based techniques has the highest applicability and robustness in the field of damage diagnosis.
Keywords: damage detection; machine learning (ML); composite structure; micro-mechanics of damage; deep learning (DL)
17. Tuning-up Learning Parameters for Deep Convolutional Neural Network: A Case Study for Hand-Drawn Sketch Images
Authors: Shaukat Hayat, Kun She, Muhammad Mateen, Parinya Suwansrikham, Muhammad Abdullah Ahmed Alghaili. Journal of Electronic Science and Technology (CAS, CSCD), 2022, No. 3, pp. 305-318 (14 pages)
Several recent successes in deep learning (DL), such as state-of-the-art performance on several image classification benchmarks, have been achieved through improved configuration. Hyperparameter (HP) tuning is a key factor affecting the performance of machine learning (ML) algorithms. Various state-of-the-art DL models use different HPs in different ways for classification tasks on different datasets. This manuscript provides a brief overview of learning parameters and configuration techniques and shows the benefits of using a large-scale hand-drawn sketch dataset for classification problems. We analyzed the impact of different learning parameters and top-layer configurations with batch normalization (BN) and dropout on the performance of the pre-trained visual geometry group 19 (VGG-19) network. The analyzed learning parameters include different learning rates and momentum values for two optimizers, stochastic gradient descent (SGD) and Adam. Our analysis demonstrates that using the SGD optimizer with small learning rates and high momentum values, along with both BN and dropout in the top layers, has a good impact on sketch image classification accuracy.
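The SGD-with-momentum update analyzed in this study can be sketched on a toy quadratic objective; the learning rate and momentum values below are illustrative, not the study's tuned settings.

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.05, momentum=0.9):
    """One SGD-with-momentum update: v <- mu*v - lr*grad; w <- w + v.
    Small learning rate with high momentum is the regime the study
    found favorable."""
    v = momentum * v - lr * grad
    return w + v, v

# Minimize f(w) = 0.5 * ||w||^2 (so grad f = w) from w0 = [4, -2].
w = np.array([4.0, -2.0])
v = np.zeros_like(w)
for _ in range(500):
    w, v = sgd_momentum_step(w, v, grad=w)
```

The velocity term accumulates consistent gradient directions, which is what lets a small learning rate still make rapid progress.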
Keywords: deep learning (DL); hand-drawn sketches; learning parameters
18. Deep Learning ResNet101 Deep Features of Portable Chest X-Ray Accurately Classify COVID-19 Lung Infection
Authors: Sobia Nawaz, Sidra Rasheed, Wania Sami, Lal Hussain, Amjad Aldweesh, Elsayed Tag eldin, Umair Ahmad Salaria, Mohammad Shahbaz Khan. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5213-5228 (16 pages)
This study is designed to develop an Artificial Intelligence (AI) based analysis tool that can accurately detect COVID-19 lung infections from portable chest X-rays (CXRs). Frontline physicians and radiologists face major challenges during the COVID-19 pandemic due to suboptimal image quality and the large volume of CXRs. In this study, AI-based analysis tools were developed that can precisely classify COVID-19 lung infection. Publicly available datasets of COVID-19 (N=1525), non-COVID-19 normal (N=1525), viral pneumonia (N=1342), and bacterial pneumonia (N=2521) images were taken from the Italian Society of Medical and Interventional Radiology (SIRM), Radiopaedia, The Cancer Imaging Archive (TCIA), and Kaggle repositories. A multi-pronged approach utilizing deep learning with ResNet101, with and without hyperparameter optimization, was employed. Additionally, the features extracted from the average pooling layer of ResNet101 were used as input to machine learning (ML) algorithms. ResNet101 with optimized parameters yielded improved performance over default parameters. The extracted ResNet101 features fed to the k-nearest neighbor (KNN) and support vector machine (SVM) classifiers yielded the highest 3-class classification performance of 99.86% and 99.46%, respectively. The results indicate that the proposed approach can improve the accuracy and diagnostic efficiency of CXR analysis, and the proposed deep learning model has the potential to further improve healthcare systems' diagnosis and prognosis of COVID-19 lung infection.
Keywords: COVID-19; deep learning (DL); lung infection; convolutional neural network (CNN)
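The two-stage pipeline described above (deep features first, then a classical classifier) can be sketched as follows. This is a minimal illustration only: synthetic, well-separated feature vectors stand in for the ResNet101 average-pool activations, and the class counts and dimensions are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for deep features extracted from a CNN's average pooling layer.
rng = np.random.default_rng(0)
n_per_class, dim = 100, 64
# Three classes (e.g., COVID-19 / normal / viral pneumonia) with shifted means
# so the toy problem is separable.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim)) for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Stage 2: classical ML classifiers trained on the (stand-in) deep features.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
svm = SVC(kernel="linear").fit(X_tr, y_tr)

print(f"KNN accuracy: {knn.score(X_te, y_te):.2f}")
print(f"SVM accuracy: {svm.score(X_te, y_te):.2f}")
```

In practice the feature matrix `X` would come from forwarding each CXR through a pretrained ResNet101 and reading out the average-pool layer.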
Cryptographic Based Secure Model on Dataset for Deep Learning Algorithms
19
Authors: Muhammad Tayyab, Mohsen Marjani, N.Z. Jhanjhi, Ibrahim Abaker Targio Hashim, Abdulwahab Ali Almazroi, Abdulaleem Ali Almazroi 《Computers, Materials & Continua》 SCIE EI 2021, No. 10, pp. 1183-1200 (18 pages)
Deep learning (DL) algorithms have been widely used in various security applications to enhance the performance of decision-based models. Malicious data added by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong prediction and misclassification by decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during the model training and testing phases. The main idea is to use cryptographic functions, such as a hash function (SHA-512) and a homomorphic encryption (HE) scheme, to provide authenticity, integrity, and confidentiality of data. The performance of the proposed model is evaluated by experiments based on accuracy, precision, attack detection rate (ADR), and computational cost. The results show that the proposed model achieved an accuracy of 98%, a precision of 0.97, and an ADR of 98%, even for a large number of attacks. Hence, the proposed model can be used to detect attacks and mitigate attacker motives. The results also show that the computational cost of the proposed model does not increase with model complexity.
Keywords: deep learning (DL); poisoning attacks; evasion attacks; neural network; hash functions; SHA-512; homomorphic encryption scheme
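A toy illustration of the integrity side of this idea: a SHA-512 fingerprint of the training set detects poisoned samples before training. Only the hashing half is shown; the HE scheme would require a dedicated library and is omitted. The `fingerprint` helper and the sample records are hypothetical, not the paper's implementation.

```python
import hashlib
import json

def fingerprint(dataset):
    """Return a SHA-512 digest over a canonical serialization of the dataset."""
    blob = json.dumps(dataset, sort_keys=True).encode("utf-8")
    return hashlib.sha512(blob).hexdigest()

# A trusted snapshot of the training data and its recorded digest.
clean = [{"x": [0.1, 0.2], "y": 0}, {"x": [0.9, 0.8], "y": 1}]
digest = fingerprint(clean)

# Integrity check before training: any tampering changes the digest.
poisoned = [{"x": [0.1, 0.2], "y": 1}, {"x": [0.9, 0.8], "y": 1}]  # label flipped
print(fingerprint(clean) == digest)      # True
print(fingerprint(poisoned) == digest)   # False
```

The check only establishes integrity against a digest stored out of band; confidentiality during training is what the HE component of the proposed model would add.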
Spectrum Sensing Using Optimized Deep Learning Techniques in Reconfigurable Embedded Systems
20
Authors: Priyesh Kumar, PonniyinSelvan 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 5, pp. 2041-2054 (14 pages)
The exponential growth of the Internet of Things (IoT) and 5G networks has resulted in a massive number of users, and the role of cognitive radio has become pivotal in handling them. In this scenario, cognitive radio techniques such as spectrum sensing, spectrum sharing and dynamic spectrum access will become essential components in wireless IoT communication. IoT devices must learn adaptively from the environment and extract spectrum knowledge and inferred spectrum knowledge by appropriately changing communication parameters such as modulation index, frequency bands, coding rate, etc., to accommodate the above characteristics. Implementing the above learning methods on an embedded chip leads to high latency, high power consumption and greater chip area utilisation. To overcome the problems mentioned above, we present DEEP HOLE Radio systems, an intelligent system enabling spectrum knowledge extraction from unprocessed samples by optimized deep learning models directly from the Radio Frequency (RF) environment. DEEP HOLE Radio provides (i) an optimized deep learning framework with a good trade-off between latency, power and utilization; (ii) a complete hardware-software architecture where the SoCs are coupled with radio transceivers for maximum performance. The experimentation has been carried out using GNU Radio software interfaced with Zynq-7000 devices mounted on ESP8266 radio transceivers with inbuilt omnidirectional antennas. The whole spectrum knowledge has been extracted using GNU Radio. These extracted features are used to train the proposed optimized deep learning models, which run in parallel on the Zynq-7000 SoC, consuming less area, power and latency. The proposed framework has been evaluated and compared with existing frameworks such as RFLearn, Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) and Deep Neural Networks (DNN). The outcome shows that the proposed framework has outperformed the existing frameworks regarding area, power and time. Moreover, the experimental results show that the proposed framework decreases the delay, power and area by 15%, 20% and 25%, respectively, compared with the existing RFLearn and other hardware-constrained frameworks.
Keywords: Internet of Things; cognitive radio; spectrum sharing; optimized deep learning framework; GNU Radio; RFLearn
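For context, the simplest classical baseline against which learned spectrum-sensing detectors are usually measured is an energy detector. A minimal sketch follows; the unit-power tone and the noise-calibrated threshold of 1.5 are assumptions for illustration, not values from the paper.

```python
import numpy as np

def energy_detect(samples, threshold):
    """Classical energy detector: declare the channel occupied when the
    average sample energy exceeds a noise-calibrated threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold

rng = np.random.default_rng(1)
n = 4096
# Unit-power complex Gaussian noise and a unit-power complex tone.
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
signal = np.exp(2j * np.pi * 0.1 * np.arange(n))

threshold = 1.5  # above the expected noise power of ~1.0
print(energy_detect(noise, threshold))           # noise only: channel idle
print(energy_detect(noise + signal, threshold))  # tone present: channel occupied
```

A DL-based sensor replaces the fixed threshold on average energy with a model trained on raw or featurized RF samples, which is what allows it to cope with varying modulations and SNRs.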