Journal Articles
1,201 articles found
1. Intelligent Fast Cell Association Scheme Based on Deep Q-Learning in Ultra-Dense Cellular Networks (Cited by: 1)
Authors: Jinhua Pan, Lusheng Wang, Hai Lin, Zhiheng Zha, Caihong Kai. China Communications (SCIE, CSCD), 2021, Issue 2, pp. 259-270.
To support dramatically increased traffic loads, communication networks are becoming ultra-dense. Traditional cell association (CA) schemes are time-consuming, forcing researchers to seek fast schemes. This paper proposes a deep Q-learning based scheme, whose main idea is to train a deep neural network (DNN) to calculate the Q values of all state-action pairs, and the cell holding the maximum Q value is associated. In the training stage, the intelligent agent continuously generates samples through trial and error to train the DNN until convergence. In the application stage, the state vectors of all users are input to the trained DNN to quickly obtain a satisfactory CA result for a scenario with the same BS locations and user distribution. Simulations demonstrate that the proposed scheme provides satisfactory CA results in a computational time several orders of magnitude shorter than traditional schemes, while performance metrics such as capacity and fairness are guaranteed.
Keywords: ultra-dense cellular networks (UDCN); cell association (CA); deep Q-learning; proportional fairness; Q-learning
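The core idea of the abstract can be sketched compactly: a DNN scores every candidate cell for a user's state vector, and the cell with the maximum Q value is associated. This is a minimal sketch only; all dimensions, layer sizes, and the state encoding are illustrative assumptions, not the paper's values.

```python
# Minimal sketch: DNN outputs one Q value per candidate cell; associate argmax.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, num_cells: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_cells),  # one Q value per candidate cell
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def associate(qnet: QNetwork, user_states: torch.Tensor) -> torch.Tensor:
    """Application stage: pick the cell with the maximum Q value per user."""
    with torch.no_grad():
        return qnet(user_states).argmax(dim=-1)

qnet = QNetwork(state_dim=16, num_cells=10)   # assumed dimensions
users = torch.randn(100, 16)                  # illustrative state vectors
cells = associate(qnet, users)                # fast CA for the whole scenario
```

Because the application stage is a single forward pass rather than an iterative search, it is plausible that the computation time drops by orders of magnitude, as the abstract reports.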
2. Deep Q-Learning Based Optimal Query Routing Approach for Unstructured P2P Network (Cited by: 1)
Authors: Mohammad Shoab, Abdullah Shawan Alotaibi. Computers, Materials & Continua (SCIE, EI), 2022, Issue 3, pp. 5765-5781.
Deep Reinforcement Learning (DRL) is a class of Machine Learning (ML) that combines deep learning with reinforcement learning and provides a framework by which a system can learn from its previous actions in an environment to select its future actions efficiently. DRL has been used in many application fields, including games, robots, and networks, to create autonomous systems that improve themselves with experience. It is well acknowledged that DRL is well suited to solving optimization problems in distributed systems in general, and in network routing especially. Therefore, a novel query routing approach called Deep Reinforcement Learning based Route Selection (DRLRS) is proposed for unstructured P2P networks based on a deep Q-learning algorithm. The main objective of this approach is to achieve better retrieval effectiveness at a reduced search cost: fewer connected peers, fewer exchanged messages, and less time. The simulation results show significantly improved resource searching compared with k-Random Walker and Directed BFS: retrieval effectiveness, search cost in terms of connected peers, and average overhead are 1.28, 106, and 149, respectively.
Keywords: reinforcement learning; deep Q-learning; unstructured P2P network; query routing
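A minimal sketch of DQN-style route selection in an unstructured overlay: the agent scores each neighbour peer and forwards the query to the argmax, with occasional exploration. The state encoding, network shape, and epsilon schedule are assumptions for illustration, not the paper's design.

```python
# Sketch: score neighbours with a small Q-network, forward epsilon-greedily.
import random
import torch
import torch.nn as nn

class RouteQNet(nn.Module):
    def __init__(self, state_dim: int = 8, max_neighbours: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, max_neighbours))  # one Q value per neighbour slot

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def next_peer(qnet: RouteQNet, state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the neighbour to forward the query to."""
    if random.random() < epsilon:
        return random.randrange(qnet.net[-1].out_features)
    with torch.no_grad():
        return int(qnet(state).argmax())

qnet = RouteQNet()
peer = next_peer(qnet, torch.randn(8))  # index of the chosen neighbour
```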
3. Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index (Cited by: 15)
Authors: Xiuliang Jin, Zhenhai Li, Haikuan Feng, Zhibin Ren, Shaokun Li. The Crop Journal (SCIE, CAS, CSCD), 2020, Issue 1, pp. 87-97.
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured or simulated Sentinel 2A vegetation indices and LAI with a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with R² = 0.76, RMSE = 2.84 t ha−1, and RRMSE = 38.22%. LAI was highly correlated with biomass (R² = 0.89, RMSE = 2.27 t ha−1, RRMSE = 30.55%). Biomass estimated from 15 hyperspectral vegetation indices using the DNN algorithm was in high agreement with measured biomass (R² = 0.83, RMSE = 1.96 t ha−1, RRMSE = 26.43%). Estimation accuracy increased further when LAI was combined with the 15 vegetation indices (R² = 0.91, RMSE = 1.49 t ha−1, RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from those between the simulated Sentinel 2A vegetation indices and biomass, and estimation from the hyperspectral indices was more accurate (R² = 0.87, RMSE = 1.84 t ha−1, RRMSE = 24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass and provides a guideline for estimating maize biomass in this region using remote sensing technology.
Keywords: biomass estimation; maize; vegetation indices; deep neural network algorithm; LAI
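The regression setup described above (15 vegetation indices plus LAI mapped to biomass in t ha−1) can be sketched as a small fully connected DNN. Layer sizes, epochs, and learning rate here are illustrative assumptions, and the random tensors stand in for field measurements.

```python
# Sketch: fully connected regression DNN, 16 inputs (15 indices + LAI) -> biomass.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),               # biomass in t/ha
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(200, 16)            # placeholder for field measurements
y = torch.rand(200, 1) * 20         # placeholder biomass targets

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

rmse = float(torch.sqrt(loss_fn(model(X), y)))  # RMSE, as reported in the paper
```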
4. The Blockchain Neural Network Superior to Deep Learning for Improving the Trust of Supply Chain
Authors: Hsiao-Chun Han, Der-Chen Huang. Computer Modeling in Engineering & Sciences, 2025, Issue 6, pp. 3921-3941.
With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning models: a Feedforward Neural Network (FNN) and a Deep Clustering Network (DCN). The study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal database (TEJ). These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, in line with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate at capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
Keywords: blockchain neural network; deep learning; consensus algorithm; supply chain management; information security management
5. Directional Routing Algorithm for Deep Space Optical Network
Authors: Lei Guo, Xiaorui Wang, Yejun Liu, Pengchao Han, Yamin Xie, Yuchen Tan. China Communications (SCIE, CSCD), 2017, Issue 1, pp. 158-168.
With the development of science, the economy, and society, research on and exploration of deep space have entered a stage of rapid, stable development. The Deep Space Optical Network (DSON) is expected to become an important foundation and an inevitable development trend of future deep-space communication. In this paper, we design a deep space node model capable of combining space division multiplexing with frequency division multiplexing. Furthermore, we propose the directional flooding routing algorithm (DFRA) for the DSON based on our node model. This scheme selectively forwards data packets during routing, so energy consumption is reduced effectively because only a portion of the nodes participate in the flooding. Simulation results show that, compared with the traditional flooding routing algorithm (TFRA), the DFRA avoids non-directional, blind transmission. The energy consumed in message routing is therefore reduced, and the lifespan of the DSON is prolonged effectively. Although the implementation complexity of routing is slightly higher than in TFRA, node energy is saved and the transmission rate is clearly improved under DFRA, so the overall performance of the DSON is significantly improved.
Keywords: deep space optical network; routing algorithm; directional flooding routing algorithm; traditional flooding routing algorithm
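A minimal sketch of the selective-forwarding idea, under the assumption that "directional" means forwarding only to neighbours that make geometric progress toward the destination; the paper's actual forwarding criterion may differ.

```python
# Sketch: forward only to neighbours closer to the destination than we are,
# instead of to all neighbours as in traditional flooding.
import math

def progress(node, neighbour, dest):
    """Positive if the neighbour is closer to the destination than `node`."""
    return math.dist(node, dest) - math.dist(neighbour, dest)

def directional_forward(node, neighbours, dest):
    """Restrict the flood to neighbours that reduce distance to the destination."""
    return [n for n in neighbours if progress(node, n, dest) > 0]

# Traditional flooding would return `neighbours` unchanged; restricting the
# set is what saves energy, at the cost of a small per-hop geometry test.
src, dst = (0.0, 0.0), (10.0, 0.0)
nbrs = [(1.0, 1.0), (-2.0, 0.0), (3.0, -1.0)]
print(directional_forward(src, nbrs, dst))  # [(1.0, 1.0), (3.0, -1.0)]
```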
6. Effective Controller Placement in Software-Defined Internet-of-Things Leveraging Deep Q-Learning (DQL)
Authors: Jehad Ali, Mohammed J. F. Alenazi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 12, pp. 4015-4032.
The controller is a main component of the Software-Defined Networking (SDN) framework and plays a significant role in enabling programmability and orchestration for 5G and next-generation networks. In SDN, frequent communication occurs between network switches and the controller, which manages and directs traffic flows. If the controller is not strategically placed within the network, this communication can experience increased delays, negatively affecting network performance. Specifically, an improperly placed controller can lead to higher end-to-end (E2E) delay, as switches must traverse more hops or incur greater propagation delays when communicating with the controller. This paper introduces a novel approach using Deep Q-Learning (DQL) to dynamically place controllers in Software-Defined Internet of Things (SD-IoT) environments, with the goal of minimizing E2E delay between switches and controllers. E2E delay, a crucial metric for network performance, is influenced by two key factors: hop count, the number of network nodes data must traverse, and propagation delay, which accounts for the physical distance between nodes. Our approach models the controller placement problem as a Markov Decision Process (MDP). In this model, the network configuration at any given time is a "state," while "actions" correspond to decisions about placing controllers or reassigning switches to controllers. Using a Deep Q-Network (DQN) to approximate the Q-function, the system learns the optimal controller placement by maximizing the cumulative reward, defined as the negative of the E2E delay: the lower the delay, the higher the reward, enabling continuous improvement of the placement strategy. The experimental results show that our DQL-based method significantly reduces E2E delay compared with traditional benchmark placement strategies. By learning dynamically from the network's real-time conditions, the proposed method keeps controller placement efficient and responsive, reducing communication delays and enhancing overall network performance.
Keywords: software-defined networking; deep Q-learning; controller placement; quality of service
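The MDP pieces named in the abstract can be made concrete on a toy topology: a state is the current controller placement, an action relocates one controller, and the reward is the negative mean switch-to-controller delay. In this sketch, hop count (via BFS) stands in for the full hop-plus-propagation delay model, and the six-node graph is an illustrative assumption.

```python
# Sketch of the reward that a DQN placement agent would maximize.
from collections import deque

ADJ = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}

def hops_from(src):
    """BFS hop counts from src to every node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in ADJ[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def reward(controllers):
    """Negative mean E2E delay (hop-count proxy): the DQN's training signal."""
    tables = [hops_from(c) for c in controllers]
    mean_delay = sum(min(t[sw] for t in tables) for sw in ADJ) / len(ADJ)
    return -mean_delay

# Exhaustive check on the toy graph shows what the agent should learn:
best = max((reward((a, b)), (a, b)) for a in ADJ for b in ADJ if a < b)
print(best)  # best 2-controller placement and its reward
```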
7. Research on the Application of a Radiative Transfer Model Based on a Deep Neural Network in a One-dimensional Variational Algorithm
Authors: HE Qiu-rui, ZHANG Rui-ling, LI Jiao-yang, WANG Zhen-zhan. Journal of Tropical Meteorology (SCIE), 2022, Issue 3, pp. 326-342.
As a typical physical retrieval algorithm for atmospheric parameters, the one-dimensional variational (1DVAR) algorithm is widely used in climate and meteorological communities and holds an important position in the field of microwave remote sensing. Among the algorithm parameters affecting the performance of 1DVAR, the accuracy of the microwave radiative transfer model used to calculate simulated brightness temperatures is the fundamental constraint on the retrieval accuracy of atmospheric parameters. In this study, a deep neural network (DNN) is used to describe the nonlinear relationship between atmospheric parameters and satellite-based microwave radiometer observations, and a DNN-based radiative transfer model is developed and applied to the 1DVAR algorithm to carry out retrieval experiments on atmospheric temperature and humidity profiles. Retrievals of temperature and humidity profiles from the Microwave Humidity and Temperature Sounder (MWHTS) onboard the Feng-Yun-3 (FY-3) satellite show that the DNN-based radiative transfer model simulates MWHTS observations more accurately than the operational radiative transfer model RTTOV, and it also enables the 1DVAR algorithm to achieve higher retrieval accuracy for temperature and humidity profiles. Applied to the 1DVAR algorithm, the DNN-based radiative transfer model can fundamentally improve the retrieval accuracy of atmospheric parameters, which may provide an important reference for applied studies in the atmospheric sciences.
Keywords: one-dimensional variational algorithm; radiative transfer model; deep neural network; FY-3; MWHTS; temperature and humidity profiles
8. Adaptive Butterfly Optimization Algorithm (ABOA) Based Feature Selection and Deep Neural Network (DNN) for Detection of Distributed Denial-of-Service (DDoS) Attacks in Cloud
Authors: S. Sureshkumar, G. K. D. Prasanna Venkatesan, R. Santhosh. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 10, pp. 1109-1123.
Cloud computing technology provides flexible, on-demand, and fully managed computing resources and services that are highly desirable. Despite this, with its distributed and dynamic nature and shortcomings in virtualization deployment, the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties. An Intrusion Detection System (IDS) is a specialized security tool that network professionals use to protect networks against attacks launched from various sources. DDoS attacks are becoming more frequent and powerful, and their attack pathways continually change, requiring the development of new detection methods. The purpose of this study is to improve detection accuracy, for which feature selection (FS) is critical: by focusing on the most relevant features, the computational load of the IDS is limited while its performance and accuracy increase. In this work, the proposed Adaptive Butterfly Optimization Algorithm (ABOA) framework assesses the effectiveness of a reduced feature subset during the feature selection phase, so that accurate classification is not compromised. A Deep Neural Network (DNN) then classifies network traffic as normal or DDoS threat traffic, with its parameters fine-tuned by specially built algorithms to detect DDoS attacks better. Reduced reconstruction error, the absence of exploding or vanishing gradients, and a smaller network are further benefits of the changes outlined in this paper. On performance criteria such as accuracy, precision, recall, and F1-score, the proposed architecture outperforms existing approaches: the proposed ABOA+DNN achieves accurate predictions with an improved accuracy rate of 99.05%.
Keywords: cloud computing; distributed denial of service; intrusion detection system; adaptive butterfly optimization algorithm; deep neural network
9. A low-complexity AMP detection algorithm with deep neural network for massive MIMO systems
Authors: Zufan Zhang, Yang Li, Xiaoqin Yan, Zonghua Ouyang. Digital Communications and Networks (CSCD), 2024, Issue 5, pp. 1375-1386.
Signal detection plays an essential role in massive Multiple-Input Multiple-Output (MIMO) systems. However, existing detection methods have not yet made a good tradeoff between Bit Error Rate (BER) and computational complexity, resulting in slow convergence or high complexity. To address this issue, a low-complexity Approximate Message Passing (AMP) detection algorithm with a Deep Neural Network (DNN), denoted AMP-DNN, is investigated in this paper. Firstly, an efficient AMP detection algorithm is derived by scalarizing a simplification of the Belief Propagation (BP) algorithm. Secondly, by unfolding the obtained AMP detection algorithm, a DNN is specifically designed for optimal performance gain. For the proposed AMP-DNN, the number of trainable parameters is related only to the number of layers, regardless of modulation scheme, antenna number, and matrix calculation, thus facilitating fast and stable training of the network. In addition, the AMP-DNN can detect different channels under the same distribution with only one training. The superior performance of the AMP-DNN is verified by theoretical analysis and experiments. The proposed algorithm reduces BER without signal prior information, especially in spatially correlated channels, and has lower computational complexity than existing state-of-the-art methods.
Keywords: massive MIMO system; approximate message passing (AMP) detection algorithm; deep neural network (DNN); bit error rate (BER); low complexity
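Deep unfolding, the technique the abstract names, turns each iteration of an iterative detector into one network "layer" with a few trainable scalars. The sketch below illustrates that structure with a simplified matched-filter-style update and a tanh symbol denoiser; it is not the paper's exact AMP derivation, and all dimensions are assumptions. Note how the parameter count depends only on the number of layers, matching the abstract's claim.

```python
# Sketch of deep unfolding: one trainable step size per layer, nothing else.
import torch
import torch.nn as nn

class UnfoldedAMP(nn.Module):
    def __init__(self, num_layers: int = 10):
        super().__init__()
        # one trainable scalar per layer, independent of antenna count
        self.steps = nn.Parameter(torch.full((num_layers,), 0.5))

    def forward(self, y: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(H.shape[1])
        for gamma in self.steps:
            r = y - H @ x                    # residual
            x = x + gamma * (H.T @ r)        # gradient-like AMP-style step
            x = torch.tanh(x)                # soft symbol denoiser (assumed)
        return x

H = torch.randn(64, 16) / 8.0                # 64 rx antennas, 16 tx streams
x_true = torch.sign(torch.randn(16))         # BPSK symbols (assumed modulation)
y = H @ x_true + 0.05 * torch.randn(64)
x_hat = UnfoldedAMP()(y, H)                  # soft estimates of the symbols
```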
10. Deep Reinforcement Learning-Based URLLC-Aware Task Offloading in Collaborative Vehicular Networks (Cited by: 5)
Authors: Chao Pan, Zhao Wang, Zhenyu Zhou, Xincheng Ren. China Communications (SCIE, CSCD), 2021, Issue 7, pp. 134-146.
Collaborative vehicular networks are a key enabler for meeting stringent ultra-reliable and low-latency communications (URLLC) requirements. A user vehicle (UV) dynamically optimizes task offloading by exploiting its collaborations with edge servers and vehicular fog servers (VFSs). However, optimizing task offloading in highly dynamic collaborative vehicular networks faces several challenges, such as guaranteeing URLLC, incomplete information, and the curse of dimensionality. In this paper, we first characterize URLLC in terms of queuing delay bound violation and high-order statistics of excess backlogs. Then, a Deep Reinforcement lEarning-based URLLC-Aware task offloading algorithM named DREAM is proposed to maximize the throughput of the UVs while satisfying the URLLC constraints in a best-effort way. Compared with existing task offloading algorithms, DREAM achieves superior performance in throughput, queuing delay, and URLLC.
Keywords: collaborative vehicular networks; task offloading; URLLC awareness; deep Q-learning
11. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (Cited by: 2)
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology (CAS, CSCD), 2018, Issue 1, pp. 11-15.
The performance of deep learning (DL) networks has been increased by elaborating network structures. However, DL networks have many parameters that strongly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. The method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, reducing both the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for the experiments. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
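GA-based hyperparameter search of this kind can be sketched directly: chromosomes encode (hidden units, epochs, learning rate), fitness is validation error, and the population evolves by selection, crossover, and mutation. Here `train_and_eval` is a hypothetical stand-in for training and validating the DBNN, replaced by a toy surrogate so the sketch runs; the bounds and population settings are assumptions.

```python
# Sketch: evolve DBNN hyperparameters with a simple genetic algorithm.
import random

BOUNDS = {"hidden": (32, 512), "epochs": (5, 100), "lr": (1e-4, 1e-1)}

def random_genome():
    return {"hidden": random.randint(*BOUNDS["hidden"]),
            "epochs": random.randint(*BOUNDS["epochs"]),
            "lr": random.uniform(*BOUNDS["lr"])}

def train_and_eval(genome):
    # Hypothetical: would train the DBNN and return validation error.
    return abs(genome["hidden"] - 128) / 512 + genome["lr"]  # toy surrogate

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(g, rate=0.2):
    return {k: (random_genome()[k] if random.random() < rate else v)
            for k, v in g.items()}

pop = [random_genome() for _ in range(20)]
for generation in range(30):
    pop.sort(key=train_and_eval)                 # lower error = fitter
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(10)]
print(pop[0])  # best hyperparameter set found
```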
12. An intelligent task offloading algorithm (iTOA) for UAV edge computing network (Cited by: 8)
Authors: Jienan Chen, Siyu Chen, Siyu Luo, Qi Wang, Bin Cao, Xiaoqian Li. Digital Communications and Networks (SCIE), 2020, Issue 4, pp. 433-443.
Unmanned Aerial Vehicles (UAVs) have emerged as a promising technology for supporting human activities such as target tracking, disaster rescue, and surveillance. However, these tasks require heavy image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, we propose an intelligent Task Offloading Algorithm (iTOA) for UAV edge computing networks. Compared with existing methods, iTOA is able to perceive the network environment intelligently and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories and acquires the best decision by maximizing the reward, such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) that supplies the prior probabilities for MCTS. The sDNN is trained by a self-supervised learning manager, where the training data set is obtained from iTOA itself, acting as its own teacher. Compared with game-theory and greedy-search based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
Keywords: unmanned aerial vehicles (UAVs); mobile edge computing (MEC); intelligent task offloading algorithm (iTOA); Monte Carlo tree search (MCTS); deep reinforcement learning; splitting deep neural network (sDNN)
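MCTS guided by a network prior, in the spirit of the PUCT rule used by AlphaGo-family methods, can be sketched on a flat action set. Here `simulate` and `policy_prior` are hypothetical stand-ins for the paper's offloading simulator and sDNN, and the three offloading actions are assumptions.

```python
# Sketch: prior-guided bandit-style MCTS over offloading actions (PUCT rule).
import math
import random

ACTIONS = ["local", "edge", "cloud"]        # assumed offloading choices

def policy_prior(state):                    # stand-in for the sDNN
    return {a: 1.0 / len(ACTIONS) for a in ACTIONS}

def simulate(state, action):                # stand-in reward: negative latency
    return -random.uniform(0.1, 1.0)

def mcts_choose(state, n_sims=200, c_puct=1.4):
    N = {a: 0 for a in ACTIONS}             # visit counts
    W = {a: 0.0 for a in ACTIONS}           # accumulated reward
    P = policy_prior(state)
    for _ in range(n_sims):
        total = 1 + sum(N.values())
        # PUCT: exploit the mean reward, explore in proportion to the prior
        a = max(ACTIONS, key=lambda a:
                (W[a] / N[a] if N[a] else 0.0)
                + c_puct * P[a] * math.sqrt(total) / (1 + N[a]))
        N[a] += 1
        W[a] += simulate(state, a)
    return max(ACTIONS, key=lambda a: N[a])  # most-visited action wins

print(mcts_choose(state=None))
```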
13. Optimal Deep Dense Convolutional Neural Network Based Classification Model for COVID-19 Disease (Cited by: 1)
Authors: A. Sheryl Oliver, P. Suresh, A. Mohanarathinam, Seifedine Kadry, Orawit Thinnukool. Computers, Materials & Continua (SCIE, EI), 2022, Issue 1, pp. 2031-2047.
Early diagnosis and detection are important tasks in controlling the spread of COVID-19. Researchers have established a number of deep learning techniques to detect the presence of COVID-19 using CT scan images and X-rays. However, these methods suffer from biased results and inaccurate detection of the disease. The current research article therefore develops an Oppositional-based Chimp Optimization Algorithm and Deep Dense Convolutional Neural Network (OCOA-DDCNN) for COVID-19 prediction using CT images in an IoT environment. The proposed methodology works in two stages: pre-processing and prediction. Initially, CT scan images of prospective COVID-19 cases are collected from an open-source system using IoT devices. The collected images are then pre-processed with a Gaussian filter, which removes unwanted noise from the CT scan images. Afterwards, the pre-processed images are sent to the prediction phase, where a Deep Dense Convolutional Neural Network (DDCNN) is applied to them. The classifier is optimally designed using the Oppositional-based Chimp Optimization Algorithm (OCOA), which selects the optimal parameters for the classifier. Finally, the technique predicts COVID-19 and classifies the results as either COVID-19 or non-COVID-19. The projected method was implemented in MATLAB and its performance was evaluated through statistical measurements. Contrasted with conventional techniques such as the Convolutional Neural Network-Firefly Algorithm (CNN-FA) and Emperor Penguin Optimization (CNN-EPO), the results established the supremacy of the proposed model.
Keywords: deep learning; deep dense convolutional neural network; COVID-19; CT images; chimp optimization algorithm
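The two-stage pipeline (Gaussian-filter denoising, then CNN classification) can be sketched in a few lines. This is a Python sketch only (the paper's implementation is in MATLAB), and the small CNN below is a generic stand-in, not the OCOA-tuned DDCNN architecture.

```python
# Sketch: stage 1 denoises a CT slice, stage 2 makes a binary prediction.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import gaussian_filter

def preprocess(ct_slice: np.ndarray, sigma: float = 1.0) -> torch.Tensor:
    """Stage 1: Gaussian smoothing to suppress acquisition noise."""
    smoothed = gaussian_filter(ct_slice.astype(np.float32), sigma=sigma)
    return torch.from_numpy(smoothed)[None, None]  # (batch, channel, H, W)

classifier = nn.Sequential(                        # stage 2: CNN (assumed)
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),                # COVID vs non-COVID
)

x = preprocess(np.random.rand(128, 128))           # placeholder CT slice
logits = classifier(x)                             # class scores
```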
14. Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection System (Cited by: 1)
Authors: Ala Saleh Alluhaidan, Masoud Alajmi, Fahd N. Al-Wesabi, Anwer Mustafa Hilal, Manar Ahmed Hamza, Abdelwahed Motwakel. Computers, Materials & Continua (SCIE, EI), 2022, Issue 8, pp. 2713-2727.
Human fall detection (FD) plays an important part in sensor-based alarm systems, enabling physical therapists to minimize the effect of fall events and save human lives. Elderly people generally suffer from several diseases, and falls can occur at any time. In this view, this paper presents an Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection (IAOA-DLFD) model to identify fall and non-fall events. The proposed IAOA-DLFD technique comprises several levels of pre-processing to improve input image quality. Besides, a Capsule Network based feature extractor with the IAOA is derived to produce an optimal set of feature vectors, where the IAOA significantly boosts overall FD performance through the optimal choice of CapsNet hyperparameters. Lastly, a radial basis function (RBF) network determines the class labels of the test images. To showcase the enhanced performance of the IAOA-DLFD technique, a wide range of experiments was executed, and the outcomes show better detection than recent methods, with an accuracy of 0.997.
Keywords: fall detection; intelligent model; deep learning; Archimedes optimization algorithm; capsule network
15. Power System Resiliency and Wide Area Control Employing Deep Learning Algorithm (Cited by: 1)
Authors: Pandia Rajan Jeyaraj, Aravind Chellachi Kathiresan, Siva Prakash Asokan, Edward Rajan Samuel Nadar, Hegazy Rezk, Thanikanti Sudhakar Babu. Computers, Materials & Continua (SCIE, EI), 2021, Issue 7, pp. 553-567.
The power transfer capability of smart transmission grid-connected networks is reduced by inter-area oscillations, since inter-area oscillation modes can destabilize power transmission networks. This effect is more noticeable in smart grid-connected systems, whose infrastructure includes many installed renewable energy resources. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide Area Controller (DWAC) uses a Deep Belief Network (DBN) whose weights are updated with real-time data from phasor measurement units. Resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962 and the inter-area damping ξ was reduced to 0.005. The obtained results validate the efficiency of the proposed deep learning algorithm in damping inter-area and local oscillations under a two-channel attack as well, and demonstrate robust management of power system resilience and timely control of operating conditions.
Keywords: neural network; deep learning algorithm; low-frequency oscillation; resiliency assessment; smart grid; wide-area control
16. Application of Improved Deep Auto-Encoder Network in Rolling Bearing Fault Diagnosis (Cited by: 1)
Authors: Jian Di, Leilei Wang. Journal of Computer and Communications, 2018, Issue 7, pp. 41-53.
Since traditional bearing fault diagnosis methods are not highly effective at extracting fault features, a bearing fault diagnosis method based on a Deep Auto-encoder Network (DAEN) optimized by Cloud Adaptive Particle Swarm Optimization (CAPSO) is proposed. On the basis of an analysis of CAPSO and the DAEN, the CAPSO-DAEN fault diagnosis model is built. The model uses the randomness and stability of the CAPSO algorithm to optimize the connection weights of the DAEN, reducing the constraints on the weights and extracting fault features adaptively. Finally, efficient and accurate fault diagnosis is implemented with a Softmax classifier. Test results show that, under appropriate parameters, the proposed method achieves higher diagnostic accuracy and more stable diagnosis results than methods based on the DAEN alone, Support Vector Machines (SVM), or the Back Propagation algorithm (BP).
Keywords: fault diagnosis; rolling bearing; deep auto-encoder network; CAPSO algorithm; feature extraction
17. Surface wave inversion with unknown number of soil layers based on a hybrid learning procedure of deep learning and genetic algorithm
Authors: Zan Zhou, Thomas Man-Hoi Lok, Wan-Huan Zhou. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2024, Issue 2, pp. 345-358.
Surface wave inversion is a key step in applying surface waves to soil velocity profiling. A common practice in inversion is to assume the number of soil layers is known before using heuristic search algorithms to compute the shear-wave velocity profile, or to treat the number of layers as an optimization variable. However, an improper selection of the number of layers may lead to an incorrect shear-wave velocity profile. In this study, a hybrid learning procedure combining deep learning and a genetic algorithm is proposed to perform surface wave inversion without assuming the number of soil layers. First, a deep neural network is adapted to learn from a large number of synthetic dispersion curves to infer the layer number. Then, the shear-wave velocity profile is determined by a genetic algorithm with the layer number known. Applying this procedure to both simulated and real-world cases shows that the proposed method is reliable and efficient for surface wave inversion.
Keywords: surface wave inversion analysis; shear-wave velocity profile; deep neural network; genetic algorithm
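The two-stage hybrid can be sketched end to end: a classifier predicts the layer count from a dispersion curve, then a GA searches shear-wave velocities for that many layers. The misfit function below is a toy surrogate for the real forward-model comparison, and the curve length, layer cap, and velocity bounds are illustrative assumptions.

```python
# Sketch: DNN infers layer count (stage 1), GA inverts velocities (stage 2).
import random
import torch
import torch.nn as nn

layer_net = nn.Sequential(          # stage 1: dispersion curve -> layer count
    nn.Linear(50, 64), nn.ReLU(),   # 50 frequency samples (assumed)
    nn.Linear(64, 5),               # up to 5 layers (assumed)
)

def predict_layers(curve: torch.Tensor) -> int:
    with torch.no_grad():
        return int(layer_net(curve).argmax()) + 1

def misfit(vs_profile, curve):
    # Hypothetical: would compare a forward-modelled curve with `curve`.
    return sum((v - 300.0) ** 2 for v in vs_profile)  # toy surrogate

def ga_invert(n_layers, curve, pop=30, gens=50):
    """Stage 2: GA over shear-wave velocities for a fixed layer count."""
    P = [[random.uniform(100, 800) for _ in range(n_layers)]
         for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda p: misfit(p, curve))       # keep the fitter half
        parents = P[:pop // 2]
        P = parents + [[v + random.gauss(0, 20) for v in random.choice(parents)]
                       for _ in range(pop - pop // 2)]
    return min(P, key=lambda p: misfit(p, curve))

observed_curve = torch.randn(50)                     # placeholder field data
profile = ga_invert(predict_layers(observed_curve), observed_curve)
```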
18. Hybrid Deep Learning-Improved BAT Optimization Algorithm for Soil Classification Using Hyperspectral Features
Authors: S. Prasanna Bharathi, S. Srinivasan, G. Chamundeeswari, B. Ramesh. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 4, pp. 579-594.
Nowadays, remote sensing (RS) techniques are used for earth observation and for detecting soil types with high accuracy and reliability. The technique provides a perspective view at spatial resolution and aids instantaneous measurement of a soil's minerals and characteristics. Soil classification using image enhancement faces challenges such as locating and plotting soil boundaries, slopes, hazardous areas, drainage conditions, land use, and vegetation. Traditional approaches have drawbacks such as manual involvement, which results in inaccuracy due to human interference, time consumption, and inconsistent prediction. To overcome these drawbacks and to improve the predictive analysis of soil characteristics, we propose a Hybrid Deep Learning improved BAT optimization algorithm (HDIB) for soil classification using remote sensing hyperspectral features. In HDIB, a spontaneous BAT optimization algorithm extracts both spectral and spatial features by choosing pure pixels from the hyperspectral (HS) image. Spectral-spatial vectors used as training samples are obtained by merging spatial and spectral vectors by means of a priority stacking methodology. A recurrent deep learning (DL) neural network (NN) then classifies the HS images on the Pavia University, Salinas, and Tamil Nadu Hill Scene datasets, improving the reliability of classification. Finally, the performance of the proposed HDIB based soil classifier is compared with existing methodologies such as the Single Layer Perceptron (SLP), Convolutional Neural Networks (CNN), and Deep Metric Learning (DML), showing improved classification accuracies of 99.87%, 98.34%, and 99.9% for the Tamil Nadu Hills, Pavia University, and Salinas scene datasets, respectively.
Keywords: HDIB; BAT optimization algorithm; recurrent deep learning neural network; convolutional neural network; single layer perceptron; hyperspectral images; deep metric learning
19. Deep Capsule Residual Networks for Better Diagnosis Rate in Medical Noisy Images
Authors: P. S. Arthy, A. Kavitha. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 1381-1393.
With the advent of machine and deep learning algorithms, medical image diagnosis has a new perception of diagnosis and clinical treatment. Regrettably, medical images remain susceptible to captured noise despite the peak in intelligent imaging techniques, and the presence of noisy images degrades both the diagnosis and the clinical treatment processes. Existing intelligent methods are deficient in handling the diverse range of noise in versatile medical images. This paper proposes a novel deep learning network that learns from a substantial extent of noise in medical data samples to alleviate this challenge. The proposed architecture exploits the advantages of the capsule network, which is used to extract correlation features and combine them with redefined residual features. Additionally, the final stage of dense learning is replaced with powerful extreme learning machines to achieve a better diagnosis rate, even for noisy and complex images. Extensive experimentation has been conducted using different medical images. Performance measures such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM) are compared with existing deep learning architectures, along with a comprehensive analysis of the individual algorithms. The experimental results prove that the proposed model outperforms the other existing algorithms by a substantial margin, establishing its supremacy over the other learning models.
Keywords: machine and deep learning algorithm; capsule networks; residual networks; extreme learning machines; correlation features
20. Deep Capsule Residual Networks for Better Diagnosis Rate in Medical Noisy Images
Authors: P. S. Arthy, A. Kavitha. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 6, pp. 2959-2971.
With the advent of machine and deep learning algorithms, medical image diagnosis has a new perception of diagnosis and clinical treatment. Regrettably, medical images remain susceptible to captured noise despite the peak in intelligent imaging techniques, and the presence of noisy images degrades both the diagnosis and the clinical treatment processes. Existing intelligent methods are deficient in handling the diverse range of noise in versatile medical images. This paper proposes a novel deep learning network that learns from a substantial extent of noise in medical data samples to alleviate this challenge. The proposed architecture exploits the advantages of the capsule network, which is used to extract correlation features and combine them with redefined residual features. Additionally, the final stage of dense learning is replaced with powerful extreme learning machines to achieve a better diagnosis rate, even for noisy and complex images. Extensive experimentation has been conducted using different medical images. Performance measures such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM) are compared with existing deep learning architectures, along with a comprehensive analysis of the individual algorithms. The experimental results prove that the proposed model outperforms the other existing algorithms by a substantial margin, establishing its supremacy over the other learning models.
Keywords: machine and deep learning algorithm; capsule networks; residual networks; extreme learning machines; correlation features