With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of‘the Overall Performance Characteristics of the Supply Chain’to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models: the Feedforward Neural Network (FNN) and the Deep Clustering Network (DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal (TEJ) database. These data are then virtualized using‘the Metaverse Algorithm’, and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, in line with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate at capturing or identifying the malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
The controller is a main component in the Software-Defined Networking (SDN) framework, which plays a significant role in enabling programmability and orchestration for 5G and next-generation networks. In SDN, frequent communication occurs between network switches and the controller, which manages and directs traffic flows. If the controller is not strategically placed within the network, this communication can experience increased delays, negatively affecting network performance. Specifically, an improperly placed controller can lead to higher end-to-end (E2E) delay, as switches must traverse more hops or encounter greater propagation delays when communicating with the controller. This paper introduces a novel approach using Deep Q-Learning (DQL) to dynamically place controllers in Software-Defined Internet of Things (SD-IoT) environments, with the goal of minimizing E2E delay between switches and controllers. E2E delay, a crucial metric for network performance, is influenced by two key factors: hop count, which measures the number of network nodes data must traverse, and propagation delay, which accounts for the physical distance between nodes. Our approach models the controller placement problem as a Markov Decision Process (MDP). In this model, the network configuration at any given time is represented as a“state,”while“actions”correspond to potential decisions regarding the placement of controllers or the reassignment of switches to controllers. Using a Deep Q-Network (DQN) to approximate the Q-function, the system learns the optimal controller placement by maximizing the cumulative reward, which is defined as the negative of the E2E delay. Essentially, the lower the delay, the higher the reward the system receives, enabling it to continuously improve its controller placement strategy. The experimental results show that our DQL-based method significantly reduces E2E delay when compared to traditional benchmark placement strategies. By dynamically learning from the network's real-time conditions, the proposed method ensures that controller placement remains efficient and responsive, reducing communication delays and enhancing overall network performance.
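As a rough illustration of the MDP formulation summarized above (state = current placement, action = a new placement decision, reward = negative E2E delay), the sketch below runs tabular Q-learning on an invented six-node topology with hop-count delays. The topology, the tabular agent, and the delay model are illustrative assumptions only; the paper itself uses a Deep Q-Network over richer SD-IoT states.

```python
import numpy as np
from collections import deque

# Invented six-node topology; edges are bidirectional, delay = hop count.
ADJ = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}

def mean_e2e_delay(controller):
    """Mean hop-count distance from every switch to the controller (BFS)."""
    dist, q = {controller: 0}, deque([controller])
    while q:
        u = q.popleft()
        for v in ADJ[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return np.mean([dist[n] for n in ADJ if n != controller])

# Tabular Q-learning stand-in for the DQN: state = current controller node,
# action = node chosen as the next controller location, reward = -mean E2E delay.
n = len(ADJ)
Q = np.zeros((n, n))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)
state = 0
for _ in range(2000):
    action = int(rng.integers(n)) if rng.random() < eps else int(np.argmax(Q[state]))
    reward = -mean_e2e_delay(action)              # lower delay -> higher reward
    Q[state, action] += alpha * (reward + gamma * Q[action].max() - Q[state, action])
    state = action

print("greedy placement from state 0:", int(np.argmax(Q[0])))
```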
Signal detection plays an essential role in massive Multiple-Input Multiple-Output (MIMO) systems. However, existing detection methods have not yet made a good tradeoff between Bit Error Rate (BER) and computational complexity, resulting in slow convergence or high complexity. To address this issue, a low-complexity Approximate Message Passing (AMP) detection algorithm with a Deep Neural Network (DNN), denoted as AMP-DNN, is investigated in this paper. Firstly, an efficient AMP detection algorithm is derived by scalarizing the simplification of the Belief Propagation (BP) algorithm. Secondly, by unfolding the obtained AMP detection algorithm, a DNN is specifically designed for optimal performance gain. For the proposed AMP-DNN, the number of trainable parameters is related only to the number of layers, regardless of modulation scheme, antenna number, and matrix calculation, thus facilitating fast and stable training of the network. In addition, the AMP-DNN can detect different channels under the same distribution with only one training. The superior performance of the AMP-DNN is also verified by theoretical analysis and experiments. It is found that the proposed algorithm enables the reduction of BER without signal prior information, especially in the spatially correlated channel, and has lower computational complexity compared with existing state-of-the-art methods.
To support dramatically increased traffic loads, communication networks become ultra-dense. Traditional cell association (CA) schemes are time-consuming, forcing researchers to seek fast schemes. This paper proposes a deep Q-learning based scheme, whose main idea is to train a deep neural network (DNN) to calculate the Q values of all the state-action pairs, and the cell holding the maximum Q value is associated. In the training stage, the intelligent agent continuously generates samples through the trial-and-error method to train the DNN until convergence. In the application stage, the state vectors of all the users are input to the trained DNN to quickly obtain a satisfactory CA result for a scenario with the same BS locations and user distribution. Simulations demonstrate that the proposed scheme provides satisfactory CA results in a computational time several orders of magnitude shorter than traditional schemes. Meanwhile, performance metrics such as capacity and fairness can be guaranteed.
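The application stage described above reduces to a forward pass of the trained DNN followed by an arg-max over cells. A minimal sketch, assuming made-up dimensions and untrained random weights in place of the trained network:

```python
# Minimal sketch of the application stage: a trained DNN maps a user's state vector to
# one Q value per candidate cell, and the user associates with the arg-max cell.
# Dimensions and the (random) weights are illustrative assumptions, not trained values.
import numpy as np

rng = np.random.default_rng(1)
state_dim, n_cells, hidden = 6, 4, 16
W1, b1 = rng.normal(size=(state_dim, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, n_cells)), np.zeros(n_cells)

def q_values(user_state):
    h = np.maximum(0.0, user_state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                          # one Q value per cell

user_state = rng.normal(size=state_dim)          # e.g. per-cell channel/load features
print("associate with cell", int(np.argmax(q_values(user_state))))
```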
Deep Reinforcement Learning (DRL) is a class of Machine Learning (ML) that combines Deep Learning with Reinforcement Learning and provides a framework by which a system can learn from its previous actions in an environment to select its future actions efficiently. DRL has been used in many application fields, including games, robots, networks, etc., for creating autonomous systems that improve themselves with experience. It is well acknowledged that DRL is well suited to solving optimization problems in distributed systems in general and network routing in particular. Therefore, a novel query routing approach called Deep Reinforcement Learning based Route Selection (DRLRS) is proposed for unstructured P2P networks based on a Deep Q-Learning algorithm. The main objective of this approach is to achieve better retrieval effectiveness with reduced search cost, in terms of fewer connected peers, exchanged messages, and time. The simulation results show significantly improved resource searching in comparison with k-Random Walker and Directed BFS: retrieval effectiveness, search cost in terms of connected peers, and average overhead are 1.28, 106, and 149, respectively.
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated Sentinel 2A vegetation indices and LAI using a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with corresponding R2, RMSE, and RRMSE of 0.76, 2.84 t ha−1, and 38.22%, respectively. LAI was highly correlated with biomass (R2=0.89, RMSE=2.27 t ha−1, and RRMSE=30.55%). Estimated biomass based on 15 hyperspectral vegetation indices was in high agreement with measured biomass using the DNN algorithm (R2=0.83, RMSE=1.96 t ha−1, and RRMSE=26.43%). Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices (R2=0.91, RMSE=1.49 t ha−1, and RRMSE=20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from relationships between simulated Sentinel 2A vegetation indices and biomass. Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A vegetation indices (R2=0.87, RMSE=1.84 t ha−1, and RRMSE=24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass, and this work provides a guideline for estimating maize biomass using remote sensing technology and the DNN algorithm in this region.
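For readers unfamiliar with the reported statistics, the sketch below shows one common way the three goodness-of-fit measures (R2, RMSE, RRMSE) are computed from measured versus estimated biomass; the numbers are synthetic stand-ins, not the study's data, and the paper's exact definitions may differ.

```python
import numpy as np

measured = np.array([4.2, 6.8, 9.5, 12.1, 15.4, 18.0])    # t/ha, field measurements
estimated = np.array([4.6, 6.1, 10.2, 11.5, 14.8, 18.9])   # t/ha, e.g. DNN output

resid = measured - estimated
rmse = np.sqrt(np.mean(resid ** 2))
rrmse = 100.0 * rmse / measured.mean()                      # relative RMSE in percent
r2 = 1.0 - np.sum(resid ** 2) / np.sum((measured - measured.mean()) ** 2)
print(f"R2={r2:.2f}, RMSE={rmse:.2f} t/ha, RRMSE={rrmse:.2f}%")
```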
Surface wave inversion is a key step in the application of surface waves to soil velocity profiling. Currently, a common practice for the process of inversion is that the number of soil layers is assumed to be known before using heuristic search algorithms to compute the shear wave velocity profile, or the number of soil layers is considered as an optimization variable. However, an improper selection of the number of layers may lead to an incorrect shear wave velocity profile. In this study, a deep learning and genetic algorithm hybrid learning procedure is proposed to perform the surface wave inversion without the need to assume the number of soil layers. First, a deep neural network is adapted to learn from a large number of synthetic dispersion curves for inferring the layer number. Then, the shear-wave velocity profile is determined by a genetic algorithm with the known layer number. By applying this procedure to both simulated and real-world cases, the results indicate that the proposed method is reliable and efficient for surface wave inversion.
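A minimal sketch of the two-stage procedure follows: a (placeholder) classifier infers the layer count from a dispersion curve, and a plain genetic algorithm then searches shear-wave velocities for that layer count. The forward model, the classifier stub, and all bounds are assumptions for illustration only, not the paper's physics or trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(5, 50, 20)

def forward_dispersion(vs):            # placeholder forward model (NOT the real physics)
    return vs.mean() / (1.0 + freqs / 50.0) + 0.1 * vs[-1]

def infer_layer_count(curve):          # placeholder for the trained DNN classifier
    return 3

observed = forward_dispersion(np.array([180.0, 260.0, 400.0]))
n_layers = infer_layer_count(observed)

def misfit(vs):
    return np.mean((forward_dispersion(vs) - observed) ** 2)

# Plain genetic algorithm over shear-wave velocity profiles with the inferred layer count.
pop = rng.uniform(100, 600, size=(60, n_layers))
for gen in range(200):
    fit = np.array([misfit(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:20]]                    # truncation selection
    kids = []
    while len(kids) < len(pop) - len(parents):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, n_layers)
        child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
        child += rng.normal(0, 10, n_layers) * (rng.random(n_layers) < 0.3)  # mutation
        kids.append(np.clip(child, 100, 600))
    pop = np.vstack([parents, kids])

best = pop[np.argmin([misfit(ind) for ind in pop])]
print("recovered Vs profile (m/s):", np.round(best, 1))
```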
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some extent, some concerns still need enhancement, particularly accuracy, sensitivity, false positives, and false negatives, to improve the brain tumor prediction system symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) to measure performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach using the SPSS tool, and the respective graphical illustrations were shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%). In the case of the CNN, the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive rate 5.328%, and mean false negative rate 4.756%. These results show that the proposed EDLA method outperforms existing algorithms, including CNN, and ensures symmetrically improved parameters. Thus, the EDLA algorithm introduces novelty concerning its performance and its particular activation function. The proposed method can be utilized effectively for brain tumor detection in a precise and accurate manner, and could be applied to other medical diagnoses after modification. If the quantity of dataset records is enormous, then the method's computational capacity has to be scaled up.
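The four reported measures follow directly from a confusion matrix; a short sketch with invented counts (not the study's data):

```python
# Accuracy, sensitivity, and false positive/negative rates from a confusion matrix.
tp, tn, fp, fn = 470, 505, 15, 10   # hypothetical prediction counts

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                 # true positive rate (recall)
fpr         = fp / (fp + tn)                 # false positive rate
fnr         = fn / (fn + tp)                 # false negative rate
print(f"accuracy={accuracy:.3%}, sensitivity={sensitivity:.3%}, FP={fpr:.3%}, FN={fnr:.3%}")
```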
With the development of science, economy, and society, the needs for research and exploration of deep space have entered a rapid and stable development stage. The Deep Space Optical Network (DSON) is expected to become an important foundation and inevitable development trend of future deep-space communication. In this paper, we design a deep space node model capable of combining space division multiplexing with frequency division multiplexing. Furthermore, we propose a directional flooding routing algorithm (DFRA) for the DSON based on our node model. This scheme selectively forwards data packets during routing, so that energy consumption can be reduced effectively because only a portion of the nodes participate in the flooding routing. Simulation results show that, compared with the traditional flooding routing algorithm (TFRA), the DFRA avoids non-directional and blind transmission. Therefore, the energy consumption in message routing is reduced and the lifespan of the DSON can be prolonged effectively. Although the complexity of routing implementation is slightly increased compared with the TFRA, the energy of nodes is saved and the transmission rate is clearly improved with the DFRA. Thus the overall performance of the DSON can be significantly improved.
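The idea that directional flooding lets only a portion of the nodes relay a packet can be illustrated on a toy topology. The "forward only to neighbours closer to the destination" rule below is an assumption standing in for the paper's actual DFRA forwarding rule, and the graph is invented:

```python
from collections import deque

ADJ = {0: [1, 2, 6], 1: [0, 3, 4], 2: [0, 4, 7], 3: [1, 5],
       4: [1, 2, 5], 5: [3, 4], 6: [0], 7: [2]}

def hops_to(dst):
    dist, q = {dst: 0}, deque([dst])
    while q:
        u = q.popleft()
        for v in ADJ[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def flood(src, dst, directional):
    dist = hops_to(dst)
    visited, q = {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            continue
        for v in ADJ[u]:
            if v in visited:
                continue
            if directional and dist[v] >= dist[u]:
                continue                      # skip neighbours that make no progress
            visited.add(v)
            q.append(v)
    return len(visited)                       # nodes that receive the packet

print("blind flooding reaches:", flood(0, 5, directional=False), "nodes")
print("directional flooding reaches:", flood(0, 5, directional=True), "nodes")
```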
As a typical physical retrieval algorithm for retrieving atmospheric parameters, the one-dimensional variational (1DVAR) algorithm is widely used in various climate and meteorological communities and enjoys an important position in the field of microwave remote sensing. Among the factors affecting the performance of the 1DVAR algorithm, the accuracy of the microwave radiative transfer model used for calculating the simulated brightness temperature is the fundamental constraint on the retrieval accuracy of the 1DVAR algorithm for atmospheric parameters. In this study, a deep neural network (DNN) is used to describe the nonlinear relationship between atmospheric parameters and satellite-based microwave radiometer observations, and a DNN-based radiative transfer model is developed and applied to the 1DVAR algorithm to carry out retrieval experiments for atmospheric temperature and humidity profiles. The retrieval results for the temperature and humidity profiles from the Microwave Humidity and Temperature Sounder (MWHTS) onboard the Feng-Yun-3 (FY-3) satellite show that the DNN-based radiative transfer model can simulate MWHTS observations more accurately than the operational radiative transfer model RTTOV, and also enables the 1DVAR algorithm to obtain higher retrieval accuracies for the temperature and humidity profiles. The DNN-based radiative transfer model applied to the 1DVAR algorithm can thus fundamentally improve the retrieval accuracy of atmospheric parameters, which may provide an important reference for various applied studies in atmospheric sciences.
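To make the role of the forward model concrete, the sketch below runs a Gauss-Newton style 1DVAR iteration in which a toy differentiable function stands in for the DNN-based radiative transfer model; the state dimensions, the background and observation error covariances, and the "true" state are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs = 4, 6
A = rng.normal(size=(n_obs, n_state))

def forward(x):                      # placeholder for the DNN forward model F(x)
    return A @ x + 0.05 * np.sin(x).sum()

def jacobian(x, eps=1e-5):           # finite-difference Jacobian of the forward model
    return np.column_stack([(forward(x + eps * e) - forward(x - eps * e)) / (2 * eps)
                            for e in np.eye(n_state)])

x_true = np.array([1.0, -0.5, 0.3, 2.0])
y = forward(x_true) + 0.01 * rng.normal(size=n_obs)      # noisy brightness temperatures
xb = np.zeros(n_state)                                    # background (a priori) state
B, R = np.eye(n_state), 0.01 ** 2 * np.eye(n_obs)         # error covariances (assumed)

x = xb.copy()
for _ in range(10):                                       # Gauss-Newton 1DVAR iterations
    K = jacobian(x)
    d = y - forward(x) + K @ (x - xb)
    x = xb + np.linalg.solve(np.linalg.inv(B) + K.T @ np.linalg.inv(R) @ K,
                             K.T @ np.linalg.inv(R) @ d)

print("retrieved state:", np.round(x, 3), " true state:", x_true)
```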
Cloud computing technology provides flexible, on-demand, and fully controlled computing resources and services, which are highly desirable. Despite this, with its distributed and dynamic nature and shortcomings in virtualization deployment, the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties. The Intrusion Detection System (IDS) is a specialized security tool that network professionals use to keep networks safe from attacks launched from various sources. DDoS attacks are becoming more frequent and powerful, and their attack pathways are continually changing, which requires the development of new detection methods. The purpose of this study is to improve detection accuracy, for which Feature Selection (FS) is critical: by focusing on the most relevant features, the IDS's computational burden is limited while its performance and accuracy increase. In this research work, the suggested Adaptive Butterfly Optimization Algorithm (ABOA) framework is used to assess the effectiveness of a reduced feature subset during the feature selection phase, and accurate classification is not compromised by using the ABOA technique. The design of Deep Neural Networks (DNN) has simplified the categorization of network traffic into normal and DDoS threat traffic, and the DNN's parameters can be fine-tuned to detect DDoS attacks better using specially built algorithms. Reduced reconstruction error, no exploding or vanishing gradients, and a reduced network are all benefits of the changes outlined in this paper. In terms of performance criteria such as accuracy, precision, recall, and F1-score, the suggested architecture outperforms the other existing approaches. Hence the proposed ABOA+DNN is an excellent method for obtaining accurate predictions, with an improved accuracy rate of 99.05% compared to other existing approaches.
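The wrapper-style interaction between feature selection and the classifier can be sketched as follows, with a simple bit-flip search standing in for ABOA, a nearest-centroid classifier standing in for the DNN, and synthetic traffic features; none of this is the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d_informative = 400, 12, 4
X = rng.normal(size=(n, d))
y = (X[:, :d_informative].sum(axis=1) > 0).astype(int)     # only first 4 features matter
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

def score(mask):
    """Accuracy of a nearest-centroid classifier on the selected feature columns."""
    if mask.sum() == 0:
        return 0.0
    cols = np.flatnonzero(mask)
    c0 = X_tr[y_tr == 0][:, cols].mean(axis=0)
    c1 = X_tr[y_tr == 1][:, cols].mean(axis=0)
    pred = (np.linalg.norm(X_te[:, cols] - c1, axis=1)
            < np.linalg.norm(X_te[:, cols] - c0, axis=1)).astype(int)
    return (pred == y_te).mean()

mask = rng.integers(0, 2, size=d)
best = score(mask)
for _ in range(300):                       # bit-flip local search over feature masks
    cand = mask.copy()
    cand[rng.integers(d)] ^= 1
    s = score(cand)
    if s >= best:
        mask, best = cand, s

print("selected features:", np.flatnonzero(mask), " accuracy:", round(best, 3))
```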
Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
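As an illustration of how one of the named optimization algorithms is typically coupled with the network, the sketch below runs a plain particle swarm optimization over two hypothetical CNN hyperparameters against a synthetic proxy objective; the real objective would be the trained CNN's prediction error for COD/TN/TP, and all bounds and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = np.array([-4.0, 8.0]), np.array([-1.0, 128.0])   # log10(lr), number of filters

def objective(p):          # proxy error surface; minimum near lr=10**-2.5, 64 filters
    return (p[0] + 2.5) ** 2 + ((p[1] - 64.0) / 64.0) ** 2

n, w, c1, c2 = 20, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for it in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best: lr={10 ** gbest[0]:.4f}, filters={gbest[1]:.0f}")
```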
In this paper, Isogeometric analysis (IGA) is effectively integrated with machine learning (ML) to investigate the bearing capacity of strip footings in layered soil profiles, with a focus on a sand-over-clay configuration. The study begins with the generation of a comprehensive dataset of 10,000 samples from IGA upper bound (UB) limit analyses, facilitating an in-depth examination of various material and geometric conditions. A hybrid deep neural network, specifically the Whale Optimization Algorithm-Deep Neural Network (WOA-DNN), is then employed to utilize these 10,000 outputs for precise bearing capacity predictions. Notably, the WOA-DNN model outperforms conventional ML techniques, offering a robust and accurate prediction tool. This innovative approach explores a broad range of design parameters, including sand layer depth, load-to-soil unit weight ratio, internal friction angle, cohesion, and footing roughness. A detailed analysis of the dataset reveals the significant influence of these parameters on bearing capacity, providing valuable insights for practical foundation design. This research demonstrates the usefulness of data-driven techniques in optimizing the design of shallow foundations within layered soil profiles, marking a significant stride in geotechnical engineering advancements.
Collaborative vehicular networks are a key enabler for meeting stringent ultra-reliable and low-latency communications (URLLC) requirements. A user vehicle (UV) dynamically optimizes task offloading by exploiting its collaborations with edge servers and vehicular fog servers (VFSs). However, the optimization of task offloading in highly dynamic collaborative vehicular networks faces several challenges, such as guaranteeing URLLC, incomplete information, and the curse of dimensionality. In this paper, we first characterize URLLC in terms of queuing delay bound violation and high-order statistics of excess backlogs. Then, a Deep Reinforcement lEarning-based URLLC-Aware task offloading algorithM named DREAM is proposed to maximize the throughput of the UVs while satisfying the URLLC constraints in a best-effort way. Compared with existing task offloading algorithms, DREAM achieves superior performance in throughput, queuing delay, and URLLC.
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which have a large influence on the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
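A minimal sketch of a genetic algorithm searching the three hyperparameters named above is shown below; the synthetic fitness function stands in for actually training the DBNN and measuring its recognition error and training time, and the search bounds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[32, 512],      # hidden units
                   [5, 100],       # training epochs
                   [1e-4, 1e-1]])  # learning rate

def fitness(ind):
    hidden, epochs, lr = ind
    # Synthetic proxy: pretend error is minimised near 256 units, 60 epochs, lr = 1e-2.
    return -((hidden - 256) ** 2 / 256 ** 2 + (epochs - 60) ** 2 / 60 ** 2
             + (np.log10(lr) + 2) ** 2)

def random_individual():
    return np.array([rng.uniform(lo, hi) for lo, hi in BOUNDS])

pop = [random_individual() for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                             # keep the best tuples
    children = []
    while len(children) < 20:
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        child = np.where(rng.random(3) < 0.5, a, b)              # uniform crossover
        child = child * rng.normal(1.0, 0.05, 3)                 # multiplicative mutation
        children.append(np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1]))
    pop = elite + children

best = max(pop, key=fitness)
print(f"hidden={best[0]:.0f}, epochs={best[1]:.0f}, lr={best[2]:.4f}")
```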
Unmanned Aerial Vehicles (UAVs) have emerged as a promising technology for the support of human activities, such as target tracking, disaster rescue, and surveillance. However, these tasks require a large computation load for image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work we propose an intelligent Task Offloading Algorithm (iTOA) for the UAV edge computing network. Compared with existing methods, iTOA is able to perceive the network's environment intelligently and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates the offloading decision trajectories to acquire the best decision by maximizing the reward, such as the lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probability for MCTS. The sDNN is trained by a self-supervised learning manager; the training data set is obtained from iTOA itself, acting as its own teacher. Compared with game theory and greedy search-based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
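The decision loop can be illustrated with a miniature Monte Carlo tree search over a short sequence of offloading choices, using a PUCT-style selection term; the uniform prior stands in for the sDNN and the latency model is an invented toy example, not the paper's system model.

```python
import math, random
random.seed(0)

LOCAL = [4.0, 3.0, 5.0]                  # per-task latency if executed on the UAV
ACTIONS = (0, 1)                          # 0 = local, 1 = offload to edge
PRIOR = {0: 0.5, 1: 0.5}                  # stand-in for the sDNN prior probabilities

def reward(decisions):                    # negative total latency of a complete plan
    k = sum(decisions)                    # offloaded tasks contend for the edge link
    total = sum(LOCAL[i] if d == 0 else 1.0 + k for i, d in enumerate(decisions))
    return -total

N, Nsa, Qsa = {}, {}, {}

def puct(state, a, c=1.4):
    q = Qsa.get((state, a), 0.0)
    return q + c * PRIOR[a] * math.sqrt(N[state]) / (1 + Nsa.get((state, a), 0))

def simulate(state=()):
    if len(state) == len(LOCAL):                         # terminal: full plan decided
        return reward(state)
    if state not in N:                                   # expand a new leaf, then rollout
        N[state] = 0
        plan = list(state) + [random.choice(ACTIONS)
                              for _ in range(len(LOCAL) - len(state))]
        value = reward(tuple(plan))
    else:                                                # select by PUCT and recurse
        a = max(ACTIONS, key=lambda act: puct(state, act))
        value = simulate(state + (a,))
        Nsa[(state, a)] = Nsa.get((state, a), 0) + 1
        q_old = Qsa.get((state, a), 0.0)
        Qsa[(state, a)] = q_old + (value - q_old) / Nsa[(state, a)]
    N[state] += 1
    return value

for _ in range(2000):
    simulate()

plan = ()
while len(plan) < len(LOCAL):                            # read out the most-visited plan
    plan += (max(ACTIONS, key=lambda act: Nsa.get((plan, act), 0)),)
print("offloading plan (0=local, 1=edge):", plan, " total latency:", -reward(plan))
```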
Early diagnosis and detection are important tasks in controlling the spread of COVID-19. A number of Deep Learning techniques have been established by researchers to detect the presence of COVID-19 using CT scan images and X-rays. However, these methods suffer from biased results and inaccurate detection of the disease. So, the current research article develops an Oppositional-based Chimp Optimization Algorithm and Deep Dense Convolutional Neural Network (OCOA-DDCNN) for COVID-19 prediction using CT images in an IoT environment. The proposed methodology works in two stages: pre-processing and prediction. Initially, CT scan images generated from prospective COVID-19 patients are collected from an open-source system using IoT devices. The collected images are then preprocessed using a Gaussian filter, which removes unwanted noise from the collected CT scan images. Afterwards, the preprocessed images are sent to the prediction phase. In this phase, a Deep Dense Convolutional Neural Network (DDCNN) is applied to the pre-processed images. The proposed classifier is optimally designed using the Oppositional-based Chimp Optimization Algorithm (OCOA), which is utilized to select the optimal parameters for the proposed classifier. Finally, the proposed technique is used to predict COVID-19 and classify the results as either COVID-19 or non-COVID-19. The projected method was implemented in MATLAB and its performance was evaluated through statistical measurements. The proposed method was contrasted with conventional techniques such as the Convolutional Neural Network-Firefly Algorithm (CNN-FA) and Emperor Penguin Optimization (CNN-EPO). The results establish the supremacy of the proposed model.
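The Gaussian-filter pre-processing step can be sketched in a few lines; the random array stands in for a real CT slice and the sigma value is an assumed setting:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

ct_slice = np.random.default_rng(0).normal(size=(128, 128))   # stand-in for a CT image
denoised = gaussian_filter(ct_slice, sigma=1.5)               # Gaussian noise smoothing
print("pixel std before/after:", round(ct_slice.std(), 3), round(denoised.std(), 3))
```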
Human fall detection (FD) plays an important part in creating sensor-based alarm systems, enabling physical therapists to minimize the effect of fall events and save human lives. Generally, elderly people suffer from several diseases, and a fall is a common situation which can occur at any time. In this view, this paper presents an Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection (IAOA-DLFD) model to identify fall/non-fall events. The proposed IAOA-DLFD technique comprises different levels of pre-processing to improve the input image quality. Besides, the IAOA with a Capsule Network based feature extractor is derived to produce an optimal set of feature vectors. In addition, the IAOA is used to significantly boost the overall FD performance through the optimal choice of CapsNet hyperparameters. Lastly, a radial basis function (RBF) network is applied to determine the proper class labels of the test images. To showcase the enhanced performance of the IAOA-DLFD technique, a wide range of experiments were executed, and the outcomes confirm the enhanced detection performance of the IAOA-DLFD approach over recent methods, with an accuracy of 0.997.
The power transfer capability of smart grid-connected transmission networks is reduced by inter-area oscillations, since inter-area oscillation modes constrain and destabilize power transmission networks. This fact is more noticeable in smart grid-connected systems, whose infrastructure has more renewable energy resources installed for its operation. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide Area Controller (DWAC) uses the Deep Belief Network (DBN). The network weights are updated based on real-time data from phasor measurement units. A resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962, and the inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep learning algorithm's efficiency in damping inter-area and local oscillations under the two-channel attack as well. The results also offer robust management of power system resilience and timely control of the operating conditions.