Journal Articles
1,180 articles found
The Blockchain Neural Network Superior to Deep Learning for Improving the Trust of Supply Chain
1
Authors: Hsiao-Chun Han, Der-Chen Huang. Computer Modeling in Engineering & Sciences, 2025, No. 6, pp. 3921-3941 (21 pages)
With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models, the Feedforward Neural Network (abbreviated as FNN) and the Deep Clustering Network (abbreviated as DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal database (abbreviated as TEJ). These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are utilized to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation algorithm (abbreviated as PoR) as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, aligning with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (ranging from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate in effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (abbreviated as AI).
Keywords: blockchain; neural network; deep learning; consensus algorithm; supply chain management; information security management
Effective Controller Placement in Software-Defined Internet-of-Things Leveraging Deep Q-Learning (DQL)
2
Authors: Jehad Ali, Mohammed J. F. Alenazi. Computers, Materials & Continua, SCIE EI, 2024, No. 12, pp. 4015-4032 (18 pages)
The controller is a main component in the Software-Defined Networking (SDN) framework, which plays a significant role in enabling programmability and orchestration for 5G and next-generation networks. In SDN, frequent communication occurs between network switches and the controller, which manages and directs traffic flows. If the controller is not strategically placed within the network, this communication can experience increased delays, negatively affecting network performance. Specifically, an improperly placed controller can lead to higher end-to-end (E2E) delay, as switches must traverse more hops or encounter greater propagation delays when communicating with the controller. This paper introduces a novel approach using Deep Q-Learning (DQL) to dynamically place controllers in Software-Defined Internet of Things (SD-IoT) environments, with the goal of minimizing E2E delay between switches and controllers. E2E delay, a crucial metric for network performance, is influenced by two key factors: hop count, which measures the number of network nodes data must traverse, and propagation delay, which accounts for the physical distance between nodes. Our approach models the controller placement problem as a Markov Decision Process (MDP). In this model, the network configuration at any given time is represented as a "state," while "actions" correspond to potential decisions regarding the placement of controllers or the reassignment of switches to controllers. Using a Deep Q-Network (DQN) to approximate the Q-function, the system learns the optimal controller placement by maximizing the cumulative reward, which is defined as the negative of the E2E delay. Essentially, the lower the delay, the higher the reward the system receives, enabling it to continuously improve its controller placement strategy. The experimental results show that our DQL-based method significantly reduces E2E delay when compared to traditional benchmark placement strategies. By dynamically learning from the network's real-time conditions, the proposed method ensures that controller placement remains efficient and responsive, reducing communication delays and enhancing overall network performance.
Keywords: software-defined networking; deep Q-learning; controller placement; quality of service
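The reward structure described in this abstract can be made concrete with a small sketch. Everything below is illustrative: the per-hop and propagation constants are assumptions, not values from the paper; only the shape of the signal (reward as the negative of an E2E delay composed of a hop-count term and a propagation term) follows the text.

```python
# Hypothetical sketch of the DQL reward described in the abstract:
# reward = -(E2E delay), where delay combines hop count and propagation delay.
# The two constants below are illustrative assumptions.

PER_HOP_DELAY_MS = 0.05             # assumed per-hop switching delay
PROPAGATION_SPEED_KM_PER_MS = 200   # roughly 2/3 the speed of light, as in fiber

def e2e_delay_ms(hops: int, distance_km: float) -> float:
    """E2E delay as hop-count delay plus propagation delay."""
    return hops * PER_HOP_DELAY_MS + distance_km / PROPAGATION_SPEED_KM_PER_MS

def reward(hops: int, distance_km: float) -> float:
    """Reward used by the DQL agent: the negative of the E2E delay."""
    return -e2e_delay_ms(hops, distance_km)

# A placement with fewer hops and shorter distance yields a higher reward.
assert reward(2, 100) > reward(5, 400)
```

Under this framing, maximizing cumulative reward is the same as minimizing cumulative E2E delay, which is why the DQN can improve placement simply by preferring higher-reward actions.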
A low-complexity AMP detection algorithm with deep neural network for massive MIMO systems
3
Authors: Zufan Zhang, Yang Li, Xiaoqin Yan, Zonghua Ouyang. Digital Communications and Networks, CSCD, 2024, No. 5, pp. 1375-1386 (12 pages)
Signal detection plays an essential role in massive Multiple-Input Multiple-Output (MIMO) systems. However, existing detection methods have not yet made a good tradeoff between Bit Error Rate (BER) and computational complexity, resulting in slow convergence or high complexity. To address this issue, a low-complexity Approximate Message Passing (AMP) detection algorithm with a Deep Neural Network (denoted as AMP-DNN) is investigated in this paper. Firstly, an efficient AMP detection algorithm is derived by scalarizing the simplification of the Belief Propagation (BP) algorithm. Secondly, by unfolding the obtained AMP detection algorithm, a DNN is specifically designed for the optimal performance gain. For the proposed AMP-DNN, the number of trainable parameters is only related to that of layers, regardless of modulation scheme, antenna number and matrix calculation, thus facilitating fast and stable training of the network. In addition, the AMP-DNN can detect different channels under the same distribution with only one training. The superior performance of the AMP-DNN is also verified by theoretical analysis and experiments. It is found that the proposed algorithm enables the reduction of BER without signal prior information, especially in the spatially correlated channel, and has a lower computational complexity compared with existing state-of-the-art methods.
Keywords: massive MIMO system; approximate message passing (AMP) detection algorithm; deep neural network (DNN); bit error rate (BER); low complexity
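The AMP recursion that the paper unfolds into a network can be sketched in a few lines. This is a generic soft-threshold AMP for a linear model y = Ax + n, not the paper's scalarized-BP derivation; the denoiser, the fixed threshold `lam`, and the iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(r, lam):
    """Elementwise soft-threshold denoiser used inside the AMP loop."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def amp(A, y, lam=0.1, iters=20):
    """Generic soft-threshold AMP for y = A x + n (illustrative sketch).

    The (n/m) * z * <eta'> term is the Onsager correction that
    distinguishes AMP from plain iterative thresholding.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z
        x_new = soft_threshold(r, lam)
        eta_prime = np.mean(np.abs(r) > lam)   # average denoiser derivative
        z = y - A @ x_new + (n / m) * z * eta_prime
        x = x_new
    return x
```

Unfolding this loop into layers, with the threshold (and any per-layer scaling) made trainable, is the standard way such an "AMP-DNN" keeps its parameter count tied only to the number of layers.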
Intelligent Fast Cell Association Scheme Based on Deep Q-Learning in Ultra-Dense Cellular Networks (Cited: 1)
4
Authors: Jinhua Pan, Lusheng Wang, Hai Lin, Zhiheng Zha, Caihong Kai. China Communications, SCIE CSCD, 2021, No. 2, pp. 259-270 (12 pages)
To support dramatically increased traffic loads, communication networks become ultra-dense. Traditional cell association (CA) schemes are time-consuming, forcing researchers to seek fast schemes. This paper proposes a deep Q-learning based scheme, whose main idea is to train a deep neural network (DNN) to calculate the Q values of all the state-action pairs, and the cell holding the maximum Q value is associated. In the training stage, the intelligent agent continuously generates samples through the trial-and-error method to train the DNN until convergence. In the application stage, state vectors of all the users are inputted to the trained DNN to quickly obtain a satisfactory CA result for a scenario with the same BS locations and user distribution. Simulations demonstrate that the proposed scheme provides satisfactory CA results in a computational time several orders of magnitude shorter than traditional schemes. Meanwhile, performance metrics, such as capacity and fairness, can be guaranteed.
Keywords: ultra-dense cellular networks (UDCN); cell association (CA); deep Q-learning; proportional fairness; Q-learning
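The application stage described above reduces to an argmax over the Q-values the trained DNN outputs for each (user, cell) pair, which is what makes it orders of magnitude faster than search-based CA. The fairness check below uses Jain's index, a common choice for this kind of evaluation; it is an assumption here, not necessarily the paper's exact metric.

```python
import numpy as np

def associate(q_values: np.ndarray) -> np.ndarray:
    """q_values: (num_users, num_cells) array -> chosen cell index per user."""
    return np.argmax(q_values, axis=1)

def jains_fairness(rates: np.ndarray) -> float:
    """Jain's fairness index: 1/n <= J <= 1, with 1 meaning perfectly fair."""
    return float(rates.sum() ** 2 / (len(rates) * np.square(rates).sum()))

# Two users, three candidate cells: each user takes its max-Q cell.
q = np.array([[0.2, 0.9, 0.1],
              [0.7, 0.3, 0.4]])
print(associate(q))  # -> [1 0]
```

The inference cost is a single forward pass plus this argmax, independent of how many candidate associations a traditional search would have enumerated.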
Deep Q-Learning Based Optimal Query Routing Approach for Unstructured P2P Network (Cited: 1)
5
Authors: Mohammad Shoab, Abdullah Shawan Alotaibi. Computers, Materials & Continua, SCIE EI, 2022, No. 3, pp. 5765-5781 (17 pages)
Deep Reinforcement Learning (DRL) is a class of Machine Learning (ML) that combines Deep Learning with Reinforcement Learning and provides a framework by which a system can learn from its previous actions in an environment to select its efforts in the future efficiently. DRL has been used in many application fields, including games, robots, and networks, for creating autonomous systems that improve themselves with experience. It is well acknowledged that DRL is well suited to solve optimization problems in distributed systems in general and network routing in particular. Therefore, a novel query routing approach called Deep Reinforcement Learning based Route Selection (DRLRS) is proposed for unstructured P2P networks based on a Deep Q-Learning algorithm. The main objective of this approach is to achieve better retrieval effectiveness with reduced searching cost through fewer connected peers, fewer exchanged messages, and reduced time. The simulation results show a significant improvement in searching for a resource compared with k-Random Walker and Directed BFS: retrieval effectiveness, search cost in terms of connected peers, and average overhead are 1.28, 106, and 149, respectively.
Keywords: reinforcement learning; deep Q-learning; unstructured P2P network; query routing
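The Q-learning backup at the heart of an approach like DRLRS can be sketched in tabular form for clarity (the paper uses a deep approximator instead of a table, and the state, action, and reward encodings below are illustrative assumptions):

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed learning/discount/exploration rates
Q = defaultdict(float)                 # Q[(state, action)] -> expected return

def choose_peer(state, candidate_peers):
    """Epsilon-greedy selection of the next peer to forward the query to."""
    if random.random() < EPSILON:
        return random.choice(candidate_peers)
    return max(candidate_peers, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_peers):
    """Standard Q-learning backup toward reward plus discounted future value."""
    best_next = max((Q[(next_state, a)] for a in next_peers), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

A positive reward when a forwarded query hits a resource, and a small negative reward per message, would steer the policy toward fewer connected peers and fewer exchanged messages, matching the objectives stated in the abstract.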
Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index (Cited: 13)
6
Authors: Xiuliang Jin, Zhenhai Li, Haikuan Feng, Zhibin Ren, Shaokun Li. The Crop Journal, SCIE CAS CSCD, 2020, No. 1, pp. 87-97 (11 pages)
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated vegetation indices of Sentinel 2A and LAI using a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with corresponding R2, RMSE, and RRMSE of 0.76, 2.84 t ha−1, and 38.22%, respectively. LAI was highly correlated with biomass (R2 = 0.89, RMSE = 2.27 t ha−1, and RRMSE = 30.55%). Estimated biomass based on 15 hyperspectral vegetation indices was in high agreement with measured biomass using the DNN algorithm (R2 = 0.83, RMSE = 1.96 t ha−1, and RRMSE = 26.43%). Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices (R2 = 0.91, RMSE = 1.49 t ha−1, and RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from relationships between simulated Sentinel 2A vegetation indices and biomass. Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A vegetation indices (R2 = 0.87, RMSE = 1.84 t ha−1, and RRMSE = 24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass. It provides a guideline for estimating maize biomass using remote sensing technology and the DNN algorithm in this region.
Keywords: biomass estimation; maize; vegetation indices; deep neural network algorithm; LAI
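The accuracy figures above pair RMSE (in t/ha) with RRMSE (in %). Assuming the common convention that RRMSE is RMSE relative to the observed mean, which matches the reported units, the two metrics are computed as follows; the sample values are made up for illustration.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error in the units of the observations (e.g. t/ha)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rrmse_percent(y_true, y_pred):
    """Relative RMSE: RMSE as a percentage of the observed mean (assumed convention)."""
    return 100.0 * rmse(y_true, y_pred) / float(np.mean(np.asarray(y_true, float)))

obs = [4.0, 6.0, 10.0]   # illustrative biomass observations, t/ha
pred = [5.0, 6.0, 9.0]   # illustrative model predictions
print(round(rmse(obs, pred), 3))  # -> 0.816
```

With this convention, RMSE = 2.27 t/ha and RRMSE = 30.55% imply a mean observed biomass of roughly 2.27 / 0.3055 ≈ 7.4 t/ha, which is a quick consistency check one can apply to such tables.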
Surface wave inversion with unknown number of soil layers based on a hybrid learning procedure of deep learning and genetic algorithm
7
Authors: Zan Zhou, Thomas Man-Hoi Lok, Wan-Huan Zhou. Earthquake Engineering and Engineering Vibration, SCIE EI CSCD, 2024, No. 2, pp. 345-358 (14 pages)
Surface wave inversion is a key step in the application of surface waves to soil velocity profiling. Currently, a common practice for the process of inversion is that the number of soil layers is assumed to be known before using heuristic search algorithms to compute the shear wave velocity profile, or the number of soil layers is considered as an optimization variable. However, an improper selection of the number of layers may lead to an incorrect shear wave velocity profile. In this study, a deep learning and genetic algorithm hybrid learning procedure is proposed to perform the surface wave inversion without the need to assume the number of soil layers. First, a deep neural network is adapted to learn from a large number of synthetic dispersion curves for inferring the layer number. Then, the shear-wave velocity profile is determined by a genetic algorithm with the known layer number. By applying this procedure to both simulated and real-world cases, the results indicate that the proposed method is reliable and efficient for surface wave inversion.
Keywords: surface wave inversion analysis; shear-wave velocity profile; deep neural network; genetic algorithm
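The second, genetic-algorithm stage of the hybrid procedure can be sketched as follows. The misfit function here is a stand-in (squared distance to a known target profile); a real implementation would compare theoretical and measured dispersion curves, and the population size, bounds, and mutation scale are all assumptions.

```python
import random

def misfit(profile, target):
    """Stand-in objective: squared distance between candidate and target velocities."""
    return sum((v - t) ** 2 for v, t in zip(profile, target))

def genetic_search(n_layers, target, pop=40, gens=60, vmin=100.0, vmax=800.0):
    """Minimal GA over per-layer shear-wave velocities, layer count fixed upfront."""
    rng = random.Random(0)
    population = [[rng.uniform(vmin, vmax) for _ in range(n_layers)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: misfit(p, target))
        parents = population[:pop // 2]           # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_layers) if n_layers > 1 else 0
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.3:                # Gaussian mutation, clamped
                i = rng.randrange(n_layers)
                child[i] = min(vmax, max(vmin, child[i] + rng.gauss(0, 30)))
            children.append(child)
        population = parents + children
    return min(population, key=lambda p: misfit(p, target))
```

The point of the hybrid design is visible here: because the classifier fixes `n_layers` before the GA runs, the search space has a constant dimension and the crossover/mutation operators stay simple.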
Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
8
Authors: M. Adimoolam, K. Maithili, N. M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. Intelligent Automation & Soft Computing, 2024, No. 1, pp. 33-55 (23 pages)
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some range, some concerns still need enhancement, particularly accuracy, sensitivity, false positives and false negatives, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) to measure performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, and respective graphical illustrations were shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive (3.012%), and false negative (3.182%). In the case of the CNN, the mean accuracy gained was 94.287%, mean sensitivity 95.612%, mean false positive 5.328%, and mean false negative 4.756%. These results show that the proposed EDLA method has outperformed existing algorithms, including CNN, and ensures symmetrically improved parameters. Thus the EDLA algorithm introduces novelty concerning its performance and particular activation function. This proposed method can be utilized effectively in brain tumor detection in a precise and accurate manner, and would apply to brain tumor diagnosis and other medical diagnoses after modification. If the quantity of dataset records is enormous, then the method's computation power has to be updated.
Keywords: brain tumor; extended deep learning algorithm; convolutional neural network; tumor detection; deep learning
Intelligent scrap steel grading algorithm based on an improved DeepLabv3+ convolutional neural network
9
Authors: 吉孟扬, 施凯旋, 郭宇, 杨博晟. 轧钢 (Steel Rolling), Peking University Core Journal, 2025, No. 5, pp. 150-158 (9 pages)
Scrap steel grading is a key link in the rational recycling of steel. To address the insufficient detection accuracy and low efficiency of existing scrap grading methods, this paper proposes an intelligent scrap grading algorithm based on an improved DeepLabv3+ convolutional neural network. The algorithm adds a hybrid attention mechanism after the Atrous Spatial Pyramid Pooling (ASPP) layer and replaces part of the atrous convolutions in the ASPP layer with depthwise strip atrous convolutions. An intelligent scrap grading model was trained on an image dataset of scrap piles constructed from real scenarios covering different material types, viewing angles, and time periods. The improved algorithm effectively raises the network's detection accuracy: the mean intersection-over-union (mIoU) improves by about 2.54% in the control group with ResNet as the backbone and by about 4.42% in the control group with Xception as the backbone, effectively improving scrap semantic segmentation accuracy. A conversion model built on the two factors of thickness and distance converts the pixel proportion each scrap class occupies in an image into its actual mass proportion, and a fully connected network fits the algorithm's results to workers' actual results. Experiments on a large amount of data show that the proposed model achieves a grading accuracy of 93.75%, clearly outperforming existing methods and meeting practical production needs.
Keywords: scrap steel; deep learning; semantic segmentation; DeepLabv3+ convolutional neural network; intelligent algorithm
Directional Routing Algorithm for Deep Space Optical Network
10
Authors: Lei Guo, Xiaorui Wang, Yejun Liu, Pengchao Han, Yamin Xie, Yuchen Tan. China Communications, SCIE CSCD, 2017, No. 1, pp. 158-168 (11 pages)
With the development of science, economy and society, the needs for research and exploration of deep space have entered a rapid and stable development stage. The Deep Space Optical Network (DSON) is expected to become an important foundation and inevitable development trend of future deep-space communication. In this paper, we design a deep space node model which is capable of combining space division multiplexing with frequency division multiplexing. Furthermore, we propose the directional flooding routing algorithm (DFRA) for DSON based on our node model. This scheme selectively forwards the data packets in the routing, so that the energy consumption can be reduced effectively because only a portion of nodes will participate in the flooding routing. Simulation results show that, compared with the traditional flooding routing algorithm (TFRA), the DFRA can avoid non-directional and blind transmission. Therefore, the energy consumption in message routing will be reduced and the lifespan of the DSON can also be prolonged effectively. Although the complexity of routing implementation is slightly increased compared with TFRA, the energy of nodes can be saved and the transmission rate is obviously improved in DFRA. Thus the overall performance of the DSON can be significantly improved.
Keywords: deep space optical network; routing algorithm; directional flooding routing algorithm; traditional flooding routing algorithm
Research on the Application of the Radiative Transfer Model Based on Deep Neural Network in One-dimensional Variational Algorithm
11
Authors: HE Qiu-rui, ZHANG Rui-ling, LI Jiao-yang, WANG Zhen-zhan. Journal of Tropical Meteorology, SCIE, 2022, No. 3, pp. 326-342 (17 pages)
As a typical physical retrieval algorithm for retrieving atmospheric parameters, the one-dimensional variational (1DVAR) algorithm is widely used in various climate and meteorological communities and enjoys an important position in the field of microwave remote sensing. Among the algorithm parameters affecting the performance of the 1DVAR algorithm, the accuracy of the microwave radiative transfer model for calculating the simulated brightness temperature is the fundamental constraint on the retrieval accuracies of the 1DVAR algorithm for retrieving atmospheric parameters. In this study, a deep neural network (DNN) is used to describe the nonlinear relationship between atmospheric parameters and satellite-based microwave radiometer observations, and a DNN-based radiative transfer model is developed and applied to the 1DVAR algorithm to carry out retrieval experiments of the atmospheric temperature and humidity profiles. The retrieval results of the temperature and humidity profiles from the Microwave Humidity and Temperature Sounder (MWHTS) onboard the Feng-Yun-3 (FY-3) satellite show that the DNN-based radiative transfer model can obtain higher accuracy for simulating MWHTS observations than the operational radiative transfer model RTTOV, and also enables the 1DVAR algorithm to obtain higher retrieval accuracies of the temperature and humidity profiles. The DNN-based radiative transfer model applied to the 1DVAR algorithm can thus fundamentally improve the retrieval accuracies of atmospheric parameters, which may provide an important reference for various applied studies in atmospheric sciences.
Keywords: one-dimensional variational algorithm; radiative transfer model; deep neural network; FY-3 MWHTS; temperature and humidity profiles
Adaptive Butterfly Optimization Algorithm(ABOA)Based Feature Selection and Deep Neural Network(DNN)for Detection of Distributed Denial-of-Service(DDoS)Attacks in Cloud
12
Authors: S. Sureshkumar, G. K. D. Prasanna Venkatesan, R. Santhosh. Computer Systems Science & Engineering, SCIE EI, 2023, No. 10, pp. 1109-1123 (15 pages)
Cloud computing technology provides flexible, on-demand, and completely controlled computing resources, and such services are highly desirable. Despite this, with its distributed and dynamic nature and shortcomings in virtualization deployment, the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties. The Intrusion Detection System (IDS) is a specialized security tool that network professionals use for the safety and security of networks against attacks launched from various sources. DDoS attacks are becoming more frequent and powerful, and their attack pathways are continually changing, which requires the development of new detection methods. The purpose of this study is to improve detection accuracy, for which Feature Selection (FS) is critical: by focusing on the most relevant elements, the IDS's computational burden is limited while its performance and accuracy increase. In this research work, the suggested Adaptive Butterfly Optimization Algorithm (ABOA) framework is used to assess the effectiveness of a reduced feature subset during the feature selection phase, and accurate classification is not compromised by using the ABOA technique. The design of Deep Neural Networks (DNN) has simplified the categorization of network traffic into normal and DDoS threat traffic, and the DNN's parameters can be fine-tuned to detect DDoS attacks better using specially built algorithms. Reduced reconstruction error, no exploding or vanishing gradients, and a reduced network are all benefits of the changes outlined in this paper. In terms of performance criteria such as accuracy, precision, recall, and F1-score, the suggested architecture outperforms the other existing approaches. Hence the proposed ABOA+DNN is an excellent method for obtaining accurate predictions, with an improved accuracy rate of 99.05% compared to other existing approaches.
Keywords: cloud computing; distributed denial of service; intrusion detection system; adaptive butterfly optimization algorithm; deep neural network
Optimization of convolutional neural networks for predicting water pollutants using spectral data in the middle and lower reaches of the Yangtze River Basin,China
13
Authors: ZHANG Guohao, LI Song, WANG Cailing, WANG Hongwei, YU Tao, DAI Xiaoxu. Journal of Mountain Science, 2025, No. 8, pp. 2851-2869 (19 pages)
Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aiming at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
Keywords: water pollutants; convolutional neural networks; data augmentation; optimization algorithms; model evaluation methods; deep learning
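The optimizer half of the PSO+CNN pairing above can be sketched with a generic particle swarm loop. Training a CNN per candidate is expensive, so the objective below is a cheap stand-in with a known minimum; in the paper's setting it would be the validation error of a CNN trained with the candidate hyperparameters, and all coefficients here are common textbook defaults, not the paper's values.

```python
import random

def pso_minimize(objective, dim, lo, hi, particles=20, iters=80, seed=0):
    """Minimal particle swarm optimization over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social acceleration
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]                    # each particle's best position
    gbest = min(pbest, key=objective)[:]          # swarm-wide best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))  # clamp to the box
            if objective(x) < objective(pbest[i]):
                pbest[i] = x[:]
                if objective(x) < objective(gbest):
                    gbest = x[:]
    return gbest

# Stand-in objective with its minimum at (0.3, 0.7) inside [0, 1]^2.
proxy = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
```

Swapping `proxy` for "train the CNN with these hyperparameters and return validation loss" turns this skeleton into the PSO-CNN coupling the abstract describes; GA and SA variants differ only in how candidates are proposed.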
Hybrid deep learning and isogeometric analysis for bearing capacity assessment of sand over clay
14
Authors: Toan Nguyen-Minh, Tram Bui-Ngoc, Jim Shiau, Tan Nguyen, Trung Nguyen-Thoi. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 8, pp. 5240-5265 (26 pages)
In this paper, Isogeometric Analysis (IGA) is effectively integrated with machine learning (ML) to investigate the bearing capacity of strip footings in layered soil profiles, with a focus on a sand-over-clay configuration. The study begins with the generation of a comprehensive dataset of 10,000 samples from IGA upper bound (UB) limit analyses, facilitating an in-depth examination of various material and geometric conditions. A hybrid deep neural network, specifically the Whale Optimization Algorithm-Deep Neural Network (WOA-DNN), is then employed to utilize these 10,000 outputs for precise bearing capacity predictions. Notably, the WOA-DNN model outperforms conventional ML techniques, offering a robust and accurate prediction tool. This innovative approach explores a broad range of design parameters, including sand layer depth, load-to-soil unit weight ratio, internal friction angle, cohesion, and footing roughness. A detailed analysis of the dataset reveals the significant influence of these parameters on bearing capacity, providing valuable insights for practical foundation design. This research demonstrates the usefulness of data-driven techniques in optimizing the design of shallow foundations within layered soil profiles, marking a significant stride in geotechnical engineering advancements.
Keywords: UB limit analysis; isogeometric analysis (IGA); hybrid deep neural network; whale optimization algorithm
Deep Reinforcement Learning-Based URLLC-Aware Task Offloading in Collaborative Vehicular Networks (Cited: 5)
15
Authors: Chao Pan, Zhao Wang, Zhenyu Zhou, Xincheng Ren. China Communications, SCIE CSCD, 2021, No. 7, pp. 134-146 (13 pages)
Collaborative vehicular networks are a key enabler to meet the stringent ultra-reliable and low-latency communications (URLLC) requirements. A user vehicle (UV) dynamically optimizes task offloading by exploiting its collaborations with edge servers and vehicular fog servers (VFSs). However, the optimization of task offloading in highly dynamic collaborative vehicular networks faces several challenges such as URLLC guaranteeing, incomplete information, and the dimensionality curse. In this paper, we first characterize URLLC in terms of queuing delay bound violation and high-order statistics of excess backlogs. Then, a Deep Reinforcement lEarning-based URLLC-Aware task offloading algorithM named DREAM is proposed to maximize the throughput of the UVs while satisfying the URLLC constraints in a best-effort way. Compared with existing task offloading algorithms, DREAM achieves superior performance in throughput, queuing delay, and URLLC.
Keywords: collaborative vehicular networks; task offloading; URLLC awareness; deep Q-learning
Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (Cited: 2)
16
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology, CAS CSCD, 2018, No. 1, pp. 11-15 (5 pages)
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which have a lot of influence on the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping purposes. This method optimizes the parameters of the DBNN method, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
An intelligent task offloading algorithm (iTOA) for UAV edge computing network (Cited: 8)
17
Authors: Jienan Chen, Siyu Chen, Siyu Luo, Qi Wang, Bin Cao, Xiaoqian Li. Digital Communications and Networks, SCIE, 2020, No. 4, pp. 433-443 (11 pages)
Unmanned Aerial Vehicles (UAVs) have emerged as a promising technology for the support of human activities, such as target tracking, disaster rescue, and surveillance. However, these tasks require a large computation load of image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work, we propose an intelligent Task Offloading Algorithm (iTOA) for the UAV edge computing network. Compared with existing methods, iTOA is able to perceive the network's environment intelligently to decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates the offloading decision trajectories to acquire the best decision by maximizing the reward, such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probability for MCTS. The sDNN is trained by a self-supervised learning manager; here, the training data set is obtained from iTOA itself as its own teacher. Compared with game theory and greedy search-based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
Keywords: Unmanned aerial vehicles (UAVs), Mobile edge computing (MEC), Intelligent task offloading algorithm (iTOA), Monte Carlo tree search (MCTS), deep reinforcement learning, Splitting deep neural network (sDNN)
Optimal Deep Dense Convolutional Neural Network Based Classification Model for COVID-19 Disease (Cited by: 1)
18
Authors: A. Sheryl Oliver, P. Suresh, A. Mohanarathinam, Seifedine Kadry, Orawit Thinnukool 《Computers, Materials & Continua》 SCIE EI, 2022, Issue 1, pp. 2031-2047, 17 pages
Early diagnosis and detection are important tasks in controlling the spread of COVID-19. A number of deep learning techniques have been established by researchers to detect the presence of COVID-19 using CT scan images and X-rays. However, these methods suffer from biased results and inaccurate detection of the disease. Therefore, the current research article develops the Oppositional-based Chimp Optimization Algorithm with Deep Dense Convolutional Neural Network (OCOA-DDCNN) for COVID-19 prediction using CT images in an IoT environment. The proposed methodology works in two stages: pre-processing and prediction. Initially, CT scan images of prospective COVID-19 cases are collected from an open-source system using IoT devices. The collected images are then pre-processed using a Gaussian filter, which removes unwanted noise from the collected CT scan images. Afterwards, the pre-processed images are sent to the prediction phase, where the Deep Dense Convolutional Neural Network (DDCNN) is applied to them. The proposed classifier is optimally designed using the Oppositional-based Chimp Optimization Algorithm (OCOA), which selects the optimal parameters for the classifier. Finally, the proposed technique predicts COVID-19 and classifies the results as either COVID-19 or non-COVID-19. The projected method was implemented in MATLAB and its performance was evaluated through statistical measurements. The proposed method was contrasted with conventional techniques such as the Convolutional Neural Network with Firefly Algorithm (CNN-FA) and with Emperor Penguin Optimization (CNN-EPO). The results established the supremacy of the proposed model.
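The Gaussian-filter pre-processing stage described above can be sketched in plain Python. The kernel size, sigma, and border handling (edge clamping) are illustrative choices, and the tiny 3x3 "image" merely stands in for a CT slice.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    # Build a normalized 2-D Gaussian kernel (weights sum to 1).
    c = size // 2
    k = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
          for j in range(size)] for i in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def gaussian_filter(img, size=3, sigma=1.0):
    # Convolve the image with the kernel, clamping indices at the borders.
    k, c = gaussian_kernel(size, sigma), size // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(size):
                for j in range(size):
                    yy = min(max(y + i - c, 0), h - 1)
                    xx = min(max(x + j - c, 0), w - 1)
                    acc += img[yy][xx] * k[i][j]
            out[y][x] = acc
    return out

noisy = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # single bright noise pixel
smoothed = gaussian_filter(noisy)
```

The isolated bright pixel is spread over its neighborhood, which is exactly the noise-suppression effect the pipeline relies on before classification; in practice one would use a vectorized implementation such as `scipy.ndimage.gaussian_filter`.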
Keywords: deep learning, deep dense convolutional neural network, COVID-19, CT images, chimp optimization algorithm
Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection System (Cited by: 1)
19
Authors: Ala Saleh Alluhaidan, Masoud Alajmi, Fahd N. Al-Wesabi, Anwer Mustafa Hilal, Manar Ahmed Hamza, Abdelwahed Motwakel 《Computers, Materials & Continua》 SCIE EI, 2022, Issue 8, pp. 2713-2727, 15 pages
Human fall detection (FD) plays an important part in creating sensor-based alarm systems, enabling physical therapists to minimize the effect of fall events and save human lives. Generally, elderly people suffer from several diseases, and a fall is a common situation which can occur at any time. In this view, this paper presents an Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection (IAOA-DLFD) model to identify fall/non-fall events. The proposed IAOA-DLFD technique comprises different levels of pre-processing to improve the input image quality. Besides, the IAOA with a Capsule Network (CapsNet) based feature extractor is derived to produce an optimal set of feature vectors. In addition, the IAOA is used to significantly boost the overall FD performance by the optimal choice of CapsNet hyperparameters. Lastly, a radial basis function (RBF) network is applied to determine the proper class labels of the test images. To showcase the enhanced performance of the IAOA-DLFD technique, a wide range of experiments were executed, and the outcomes demonstrate the enhanced detection performance of the IAOA-DLFD approach over recent methods, with an accuracy of 0.997.
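The final RBF-network classification stage can be sketched as below. The centers, widths, and read-out weights here are illustrative stand-ins for values that would be learned from the CapsNet feature vectors; the two-dimensional features and the prototype positions are assumptions for the example.

```python
import math

def rbf_activations(x, centers, gamma=1.0):
    # Gaussian radial basis activation for each center.
    return [math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, c)))
            for c in centers]

def rbf_classify(x, centers, weights, gamma=1.0):
    # Linear read-out over the radial basis activations; one weight row
    # per class ("fall", "non-fall").
    phi = rbf_activations(x, centers, gamma)
    scores = [sum(w * p for w, p in zip(row, phi)) for row in weights]
    return "fall" if scores[0] > scores[1] else "non-fall"

centers = [(0.9, 0.8), (0.1, 0.2)]   # assumed prototypes: fall / non-fall
weights = [[1.0, 0.0], [0.0, 1.0]]   # identity read-out for the sketch
label = rbf_classify((0.85, 0.75), centers, weights)
```

A feature vector near the "fall" prototype activates that basis function most strongly and is labeled accordingly; in the paper's pipeline, the IAOA would additionally tune hyperparameters such as `gamma`.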
Keywords: Fall detection, intelligent model, deep learning, Archimedes optimization algorithm, capsule network
Power System Resiliency and Wide Area Control Employing Deep Learning Algorithm (Cited by: 1)
20
Authors: Pandia Rajan Jeyaraj, Aravind Chellachi Kathiresan, Siva Prakash Asokan, Edward Rajan Samuel Nadar, Hegazy Rezk, Thanikanti Sudhakar Babu 《Computers, Materials & Continua》 SCIE EI, 2021, Issue 7, pp. 553-567, 15 pages
The power transfer capability of smart transmission grid-connected networks is reduced by inter-area oscillations, since inter-area oscillation modes cause instability of power transmission networks. This fact is more noticeable in smart grid-connected systems, whose infrastructure has more renewable energy resources installed. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide-Area Controller (DWAC) uses the Deep Belief Network (DBN), whose weights are updated based on real-time data from phasor measurement units. Resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962, and the inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep learning algorithm's efficiency in damping inter-area and local oscillations under a 2-channel attack as well. The results also offer robust management of power system resilience and timely control of the operating conditions.
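The damping figure ξ quoted above is the standard small-signal damping ratio: for an oscillatory mode with eigenvalue λ = σ + jω, ξ = -σ / sqrt(σ² + ω²). A minimal sketch of that calculation, with an assumed poorly damped inter-area mode around 0.6 Hz rather than the actual IEEE 39-bus values:

```python
import math

def damping_ratio(sigma, omega):
    # Damping ratio of the mode with eigenvalue sigma + j*omega;
    # positive when the mode decays (sigma < 0).
    return -sigma / math.sqrt(sigma ** 2 + omega ** 2)

def mode_frequency_hz(omega):
    # Oscillation frequency of the mode in hertz.
    return omega / (2 * math.pi)

# Assumed inter-area mode: slow decay, ~0.6 Hz oscillation.
sigma, omega = -0.02, 2 * math.pi * 0.6
zeta = damping_ratio(sigma, omega)
```

A ratio this small (well under the 3-5% often considered acceptable for inter-area modes) is what a wide-area damping controller is meant to improve.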
Keywords: Neural network, deep learning algorithm, low-frequency oscillation, resiliency assessment, smart grid, wide-area control