The numerical simulation of the interaction between fluid flow and flexible rods is complicated and computationally expensive. In this paper, a semi-resolved model coupling computational fluid dynamics and flexible rod dynamics is proposed using a two-way domain expansion method. The governing equations of the flexible rod dynamics are discretized and solved by the finite element method, and the fluid flow is simulated by the finite volume method. The interaction between the fluid and the solid rods is modeled by introducing body force terms into the momentum equations. Compared with the traditional semi-resolved numerical model, an anisotropic Gaussian kernel function method is proposed to specify the interaction forces between the fluid and solid bodies for non-circular rod cross-sections. A benchmark of flow past a single flexible plate with a rectangular cross-section is used to validate the algorithm. With a focus on engineering applications, a test case of a finite patch of cylinders is used to validate the accuracy and efficiency of the coupled model.
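As an illustration of the kind of weighting such a method implies (the paper's exact formulation is not reproduced here), the sketch below evaluates a two-dimensional anisotropic Gaussian kernel whose principal axes follow a rectangular cross-section; the bandwidths `a`, `b` and the orientation `theta` are assumed, illustrative parameters.

```python
import numpy as np

def anisotropic_gaussian_weight(dx, dy, a, b, theta):
    """Anisotropic Gaussian weight for a point offset (dx, dy) from a rod node.

    a, b  : bandwidths along the two principal axes of the cross-section
    theta : orientation of the cross-section's major axis (radians)
    The weight decays faster across the thin direction of the rectangle.
    """
    # Rotate the offset into the cross-section's principal frame.
    c, s = np.cos(theta), np.sin(theta)
    u = c * dx + s * dy
    v = -s * dx + c * dy
    # Continuous normalization of the 2-D anisotropic Gaussian density.
    return np.exp(-0.5 * ((u / a) ** 2 + (v / b) ** 2)) / (2.0 * np.pi * a * b)

# Example: weights over a small grid of fluid-cell centers around a rod node.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
w = anisotropic_gaussian_weight(xs, ys, a=0.4, b=0.1, theta=np.pi / 6)
w /= w.sum()  # discrete re-normalization before distributing the body force
```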
This article addresses the nonlinear state estimation problem where the conventional Gaussian assumption is completely relaxed. Here, the uncertainties in the process and measurements are assumed non-Gaussian, such that the maximum correntropy criterion (MCC) is chosen to replace the conventional minimum mean square error criterion. Furthermore, the MCC is realized using Gaussian as well as Cauchy kernels by defining an appropriate cost function. Simulation results demonstrate the superior estimation accuracy of the developed estimators for two nonlinear estimation problems.
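For reference, the sketch below shows how a correntropy-style cost could be formed from the estimation errors with either a Gaussian or a Cauchy kernel; the bandwidth `sigma` and the simple sample-average cost are illustrative assumptions, not the paper's exact cost function.

```python
import numpy as np

def gaussian_kernel(e, sigma=1.0):
    """Gaussian correntropy kernel evaluated on the estimation error e."""
    return np.exp(-e ** 2 / (2.0 * sigma ** 2))

def cauchy_kernel(e, sigma=1.0):
    """Cauchy kernel: heavier tails, even less sensitive to large outliers."""
    return 1.0 / (1.0 + e ** 2 / sigma ** 2)

def correntropy_cost(errors, kernel=gaussian_kernel, sigma=1.0):
    """Sample correntropy of the errors; maximizing it down-weights outliers,
    unlike the MMSE criterion, which squares (and so amplifies) them."""
    return np.mean(kernel(np.asarray(errors), sigma))

residuals = np.array([0.1, -0.2, 0.05, 5.0])             # one gross outlier
print(correntropy_cost(residuals))                        # Gaussian kernel
print(correntropy_cost(residuals, kernel=cauchy_kernel))  # Cauchy kernel
```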
With the vigorous expansion of nonlinear adaptive filtering with real-valued kernel functions, complex kernel adaptive filtering algorithms were subsequently proposed to solve the complex-valued nonlinear problems arising in almost all real-world applications. This paper first presents two schemes of complex Gaussian kernel-based adaptive filtering algorithms to illustrate their respective characteristics. Then the theoretical convergence behavior of the complex Gaussian kernel least mean square (LMS) algorithm is studied using the fixed dictionary strategy. The simulation results demonstrate that the theoretical curves predicted by the derived analytical models consistently coincide with the Monte Carlo simulation results in both the transient and steady-state stages for the two introduced complex Gaussian kernel LMS algorithms using non-circular complex data. The analytical models can be regarded as a theoretical tool for evaluating and comparing the mean square error (MSE) performance of complex kernel LMS (KLMS) methods for a specified kernel bandwidth and dictionary length.
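The sketch below is a minimal kernel LMS loop using a Gaussian kernel on complex data (the "complexified real kernel" variant, where the kernel acts on the Hermitian distance); the step size, bandwidth, dictionary size, and nearest-center update once the dictionary is full are illustrative assumptions rather than the schemes analyzed in the paper.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Real-valued Gaussian kernel on complex vectors via the Hermitian norm."""
    d = x - y
    return np.exp(-np.real(np.vdot(d, d)) / (2.0 * sigma ** 2))

def klms_predict(x, dictionary, weights, sigma):
    return sum(w * gauss_kernel(x, c, sigma) for w, c in zip(weights, dictionary))

def klms_train(X, d, mu=0.2, sigma=1.0, max_dict=50):
    """Kernel LMS with a size-capped (fixed) dictionary."""
    dictionary, weights, errors = [], [], []
    for x, dn in zip(X, d):
        e = dn - klms_predict(x, dictionary, weights, sigma)  # complex a priori error
        errors.append(abs(e) ** 2)
        if len(dictionary) < max_dict:
            dictionary.append(x)
            weights.append(mu * e)
        else:
            # Dictionary fixed: adjust the weight of the closest existing center.
            j = int(np.argmin([np.linalg.norm(x - c) for c in dictionary]))
            weights[j] += mu * e
    return dictionary, weights, errors

# Toy non-circular complex data: real and imaginary parts are correlated.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) * (1 + 0.5j)
d = np.array([np.tanh(np.real(x).sum()) + 1j * 0.3 * np.imag(x).sum() for x in X])
_, _, mse_curve = klms_train(X, d)
```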
Panel data combine cross-section data and time series data. If the cross-section consists of locations, the correlation among locations needs to be checked. ρ and λ are parameters in the generalized spatial model that capture the effect of correlation between locations. The values of ρ and λ influence the goodness of fit of the model, so their estimation is important. The effect of other locations is captured by constructing a contiguity matrix and deriving a spatial weight matrix (W) from it. There are several types of W: uniform W, binary W, Gaussian kernel W, and matrices derived from real economic or transportation conditions of the locations. This study aims to compare the uniform W and the Gaussian kernel W in a spatial panel data model using the RMSE value. The analysis showed that the uniform weight had a lower RMSE than the Gaussian kernel model, and the uniform W gave stable values for all combinations.
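As a point of reference, the sketch below builds a uniform W and a row-standardized Gaussian kernel W from location coordinates; the bandwidth choice and the row standardization are common conventions assumed here, not details taken from the study.

```python
import numpy as np

def spatial_weights(coords, bandwidth=None):
    """Build uniform and Gaussian-kernel spatial weight matrices from coordinates."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

    # Uniform W: equal weight given to every other location.
    W_uniform = (1.0 - np.eye(n)) / (n - 1)

    # Gaussian kernel W: weight decays with distance; bandwidth defaults to max distance.
    h = bandwidth if bandwidth is not None else dist.max()
    W_gauss = np.exp(-0.5 * (dist / h) ** 2)
    np.fill_diagonal(W_gauss, 0.0)
    W_gauss /= W_gauss.sum(axis=1, keepdims=True)  # row standardization
    return W_uniform, W_gauss

coords = [(0, 0), (1, 0), (0, 2), (3, 1)]  # four hypothetical locations
W_u, W_g = spatial_weights(coords)
```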
Predicting the power obtained at the output of a photovoltaic (PV) system is fundamental for the optimum use of the PV system. However, it varies at different times of the day depending on intermittent and nonlinear environmental conditions, including solar irradiation, temperature and wind speed. Short-term power prediction is vital in PV systems to reconcile generation and demand in terms of the cost and capacity of the reserve. In this study, a Gaussian kernel based Support Vector Regression (SVR) prediction model using multiple input variables is proposed for estimating the maximum power obtained with the perturb and observe method at different irradiations and temperatures for the short term in the DC-DC boost converter of the PV system. The performance of a kernel-based prediction model depends on the availability of a suitable kernel function that matches the learning objective, since an unsuitable kernel function or poor hyperparameter tuning results in significantly degraded performance. In this study, for the first time in the literature, both the maximum power at the maximum power point is obtained and a short-term maximum power estimation is made. To evaluate the performance of the suggested model, PV power data simulated at variable irradiations and temperatures for one day in a PV system modeled in MATLAB were used. The maximum power obtained from the simulated system at maximum irradiance was 852.6 W. The accuracy and performance of the suggested forecasting model were evaluated using error statistics such as the root mean square error (RMSE) and mean square error (MSE). The MSE and RMSE obtained with the ANN model were 4.5566×10⁻⁴ and 0.0213, those obtained with the SWD-FFNN model were 13.0000×10⁻⁴ and 0.0362, while the SVR model yielded an MSE of 1.1548×10⁻⁵ and an RMSE of 0.0034. In the short-term maximum power prediction, SVR therefore gave higher prediction performance than ANN and SWD-FFNN.
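A minimal sketch of a Gaussian (RBF) kernel SVR for short-term power prediction with scikit-learn is given below; the feature set (irradiance, temperature, hour of day), the surrogate target, and the hyperparameter values are assumptions for illustration, not the study's tuned settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Hypothetical ten-day dataset: [irradiance W/m^2, temperature C, hour of day].
hours = np.tile(np.arange(24), 10)
irr = np.clip(800 * np.sin(np.pi * (hours - 6) / 12), 0, None) + rng.normal(0, 20, hours.size)
temp = 20 + 10 * np.sin(np.pi * (hours - 8) / 12) + rng.normal(0, 1, hours.size)
X = np.column_stack([irr, temp, hours])
p_max = 0.9 * irr - 0.5 * (temp - 25) + rng.normal(0, 5, hours.size)  # surrogate target

# Gaussian-kernel SVR with scaled inputs; C, gamma, epsilon would normally be tuned.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, gamma="scale", epsilon=0.1))
model.fit(X[:192], p_max[:192])
pred = model.predict(X[192:])
rmse = mean_squared_error(p_max[192:], pred) ** 0.5
print(f"RMSE on held-out samples: {rmse:.2f} W")
```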
In polyester fiber industrial processes, the prediction of key performance indicators is vital for product quality. The esterification process is an indispensable step in the polyester polymerization process, and it is characterized by strong coupling, nonlinearity and a complex mechanism. To address these problems, we put forward a multi-output Gaussian process regression (MGPR) model based on a combined kernel function for the polyester esterification process. Since seasonal and trend decomposition using loess (STL) can extract the periodic and trend characteristics of time series, a combined kernel function based on STL and kernel function analysis is constructed for the MGPR. The effectiveness of the proposed model is verified on actual polyester esterification process data collected from fiber production.
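The paper's combined kernel is not reproduced here; the sketch below merely illustrates the idea of composing a periodic (seasonal) term, a smooth trend term, and a noise term for Gaussian process regression in scikit-learn, with multi-output targets handled by fitting on a two-column y. All kernel parameters and the toy data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel, ConstantKernel

# Illustrative combined kernel: trend (long RBF) + seasonality (periodic) + noise.
kernel = (ConstantKernel(1.0) * RBF(length_scale=50.0)
          + ConstantKernel(0.5) * ExpSineSquared(length_scale=5.0, periodicity=24.0)
          + WhiteKernel(noise_level=0.1))

rng = np.random.default_rng(0)
t = np.linspace(0, 200, 400)[:, None]             # time index of process samples
y = np.column_stack([                              # two hypothetical quality indicators
    0.01 * t.ravel() + np.sin(2 * np.pi * t.ravel() / 24) + rng.normal(0, 0.1, 400),
    0.02 * t.ravel() + 0.5 * np.cos(2 * np.pi * t.ravel() / 24) + rng.normal(0, 0.1, 400),
])

gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t[:300], y[:300])
mean, std = gpr.predict(t[300:], return_std=True)  # predictions with uncertainty
```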
Silicone material extrusion (MEX) is widely used for processing liquids and pastes. Owing to the uneven line width and elastic extrusion deformation caused by material accumulation, products may exhibit geometric errors and performance defects, leading to a decline in product quality and affecting service life. This study proposes a process parameter optimization method that considers both the mechanical properties of printed specimens and production costs. To improve the quality of silicone-printed samples and reduce production costs, three machine learning models, kernel extreme learning machine (KELM), support vector regression (SVR), and random forest (RF), were developed to predict these factors. Training data were obtained through a full factorial experiment. A new dataset is then constructed using a Euclidean distance method that assigns an elimination factor; with Bayesian optimization used for hyperparameter tuning, this dataset is fed into an improved double-Gaussian extreme learning machine to obtain the improved KELM model. The results showed improved prediction accuracy over SVR and RF. Furthermore, a multi-objective optimization framework was proposed by combining a genetic algorithm with the improved KELM model. The effectiveness and reasonableness of the model and algorithm were verified by comparing the optimized results with the experimental results.
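For orientation, a minimal kernel extreme learning machine with a Gaussian kernel is sketched below, using the standard ridge-regularized closed-form solution; the regularization constant, kernel width, and toy data are assumptions, and the paper's improved double-Gaussian variant is not reproduced.

```python
import numpy as np

def rbf_kernel_matrix(A, B, gamma):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: beta = (K + I/C)^-1 T, f(x) = k(x, X) beta."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = np.asarray(X, float)
        K = rbf_kernel_matrix(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(K)) / self.C, np.asarray(T, float))
        return self

    def predict(self, Xnew):
        return rbf_kernel_matrix(np.asarray(Xnew, float), self.X, self.gamma) @ self.beta

# Toy usage on hypothetical (process parameters -> strength) data.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (60, 3))                    # e.g. speed, pressure, layer height
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 60)
model = KELM(C=100.0, gamma=2.0).fit(X[:45], y[:45])
pred = model.predict(X[45:])
```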
In order to meet the demand of online optimal operation, a novel soft sensor modeling approach based on Gaussian processes was proposed. The approach is moderately simple to implement and use without loss of performance. It is trained by optimizing the hyperparameters with the scaled conjugate gradient algorithm, using the squared exponential covariance function. Experimental simulations on a real-world example from a refinery show the advantages of the soft sensor modeling approach. Meanwhile, the method opens new possibilities for the application of kernel methods to potential fields.
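A minimal sketch of a GP soft sensor with a squared exponential (RBF) covariance is shown below; the data are hypothetical, and scikit-learn tunes the hyperparameters by maximizing the log marginal likelihood with L-BFGS rather than the scaled conjugate gradient algorithm used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# Squared exponential covariance with signal variance and observation noise.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(4)) + WhiteKernel(0.1)

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 4))                                    # easy-to-measure inputs
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 150)    # hard-to-measure target

gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=3).fit(X[:120], y[:120])
mean, std = gp.predict(X[120:], return_std=True)  # soft-sensor estimate with uncertainty
```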
The resolution of ocean reanalysis datasets is generally low because of the limited resolution of their associated numerical models. Low-resolution ocean reanalysis datasets are therefore usually interpolated to provide an initial or boundary field for higher-resolution regional ocean models. However, traditional interpolation methods (nearest neighbor interpolation, bilinear interpolation, and bicubic interpolation) lack physical constraints and can generate significant errors at land-sea boundaries and around islands. In this paper, a machine learning method is used to design an interpolation algorithm based on Gaussian process regression. The method uses a multiscale kernel function to process two-dimensional meteorological and ocean processes and introduces multiscale physical feature information (sea surface wind stress, sea surface heat flux, and ocean current velocity). This greatly improves the spatial resolution of ocean features and the interpolation accuracy. The effectiveness of the algorithm was validated through interpolation experiments relating to sea surface temperature (SST). The root mean square error (RMSE) of the interpolation algorithm was 38.9%, 43.7%, and 62.4% lower than that of bilinear interpolation, bicubic interpolation, and nearest neighbor interpolation, respectively. The interpolation accuracy was also significantly better in offshore areas and around islands. The algorithm has an acceptable runtime cost and good temporal and spatial generalizability.
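The sketch below illustrates the general idea of GP-based interpolation with a multiscale kernel and auxiliary physical features; the two length-scale groups, the feature ordering, and the synthetic coarse-grid data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# Multiscale kernel: a broad RBF for basin-scale structure plus a narrow RBF for
# coastal/island-scale structure, acting on [lon, lat, wind stress, heat flux, u, v].
kernel = (ConstantKernel(1.0) * RBF(length_scale=[5.0, 5.0, 1.0, 1.0, 1.0, 1.0])
          + ConstantKernel(0.3) * RBF(length_scale=[0.5, 0.5, 1.0, 1.0, 1.0, 1.0])
          + WhiteKernel(1e-2))

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, (300, 6))                  # hypothetical coarse-grid samples
sst = 20 + 2 * np.sin(X[:, 0] / 3) + 0.5 * X[:, 2] + rng.normal(0, 0.1, 300)

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, sst)
X_fine = rng.uniform(0, 10, (50, 6))              # target fine-grid points
sst_fine, sst_std = gp.predict(X_fine, return_std=True)
```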
This paper presents a classifier named the kernel-based nonlinear representor (KNR) for optimal representation of pattern features. Adopting a Gaussian kernel, with the kernel width adaptively estimated by a simple technique, it is applied to eigenface classification. Experimental results on the ORL face database show that it improves the classification rate by around 6 points over the Euclidean distance classifier.
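The paper's width-estimation technique is not specified here; as one common, simple alternative, the sketch below uses the median pairwise distance heuristic to set the Gaussian kernel width from the training features (an illustrative assumption only).

```python
import numpy as np

def median_heuristic_width(X):
    """Set the Gaussian kernel width to the median pairwise Euclidean distance."""
    X = np.asarray(X, float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.median(d[np.triu_indices(len(X), k=1)])

def gaussian_kernel(x, y, sigma):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

features = np.random.default_rng(5).normal(size=(40, 30))   # e.g. eigenface coefficients
sigma = median_heuristic_width(features)
k = gaussian_kernel(features[0], features[1], sigma)
```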
It is important to have a reasonable estimate of the sediment transport rate given its significant role in the planning and management of water resources projects. The complicated nature of sediment transport in gravel-bed rivers causes inaccuracies in the empirical formulas used to predict this phenomenon. Artificial intelligence methods, as alternative approaches, can provide solutions to such complex problems. The present study investigated the capability of kernel-based approaches in predicting total sediment loads and identifying the influential parameters of total sediment transport. For this purpose, Gaussian process regression (GPR), support vector machines (SVM) and kernel extreme learning machines (KELM) were applied to enhance the prediction of total sediment loads in 19 mountain gravel-bed streams and rivers located in the United States. Several parameters based on two scenarios were investigated, and the resulting predictions were compared with some well-known formulas. Scenario 1 considers only hydraulic characteristics, while the second scenario uses both hydraulic and sediment properties. The obtained results reveal that using the hydraulic-condition parameters as inputs gives a good estimate of total sediment loads. Furthermore, the KELM method with input parameters of the Froude number (Fr), the ratio of average velocity (V) to shear velocity (U*), and the Shields number (θ) yields a correlation coefficient (R) of 0.951, a Nash-Sutcliffe efficiency (NSE) of 0.903 and a root mean squared error (RMSE) of 0.021, indicating superior results compared with the other methods. A sensitivity analysis showed that the ratio of average velocity to shear velocity and the Froude number are the most effective parameters in predicting the total sediment loads of gravel-bed rivers.
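For context, the sketch below computes the three dimensionless inputs mentioned above from basic flow and sediment quantities using their standard definitions (shear velocity from the depth-slope product); the sample values are hypothetical and not taken from the study's dataset.

```python
import numpy as np

G = 9.81          # gravitational acceleration, m/s^2
RHO_W = 1000.0    # water density, kg/m^3
RHO_S = 2650.0    # sediment density, kg/m^3 (typical quartz value)

def dimensionless_inputs(V, h, S, d50):
    """Froude number, velocity ratio V/U*, and Shields number from bulk quantities.

    V   : cross-section average velocity (m/s)
    h   : flow depth (m)
    S   : energy slope (-)
    d50 : median grain size (m)
    """
    Fr = V / np.sqrt(G * h)                      # Froude number
    u_star = np.sqrt(G * h * S)                  # shear velocity (depth-slope product)
    tau = RHO_W * G * h * S                      # bed shear stress
    theta = tau / ((RHO_S - RHO_W) * G * d50)    # Shields number
    return Fr, V / u_star, theta

print(dimensionless_inputs(V=1.2, h=0.6, S=0.004, d50=0.03))
```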
Data-driven paradigms are well-known and salient demands of future wireless communication. Empowered by big data and machine learning techniques, next-generation data-driven communication systems will be intelligent, with unique characteristics of expressiveness, scalability, interpretability, and uncertainty awareness, which can confidently support diversified latent demands and personalized services in the foreseeable future. In this paper, we review a promising family of nonparametric Bayesian machine learning models, i.e., Gaussian processes (GPs), and their applications in wireless communication. Since GP models demonstrate outstanding expressive and interpretable learning ability with uncertainty, they are particularly suitable for wireless communication. Moreover, they provide a natural framework for combining data and empirical models (DEM). Specifically, we first envision three-level motivations for data-driven wireless communication using GP models. Then, we present the background of GPs in terms of covariance structure and model inference. The expressiveness of the GP model using various interpretable kernels, including stationary, non-stationary, deep and multi-task kernels, is showcased. Furthermore, we review distributed GP models with promising scalability, which are suitable for applications in wireless networks with a large number of distributed edge devices. Finally, we list representative solutions and promising techniques that adopt GP models in various wireless communication applications.
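Among the distributed GP models mentioned, a common aggregation rule (not necessarily the one used in the reviewed works) is the product-of-experts combination of local predictions, sketched below with scikit-learn experts trained on disjoint data shards; the path-loss-style toy data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def poe_aggregate(means, stds):
    """Product-of-experts fusion: precision-weighted mean, combined precision."""
    prec = 1.0 / np.square(stds)
    var = 1.0 / prec.sum(axis=0)
    mean = var * (prec * means).sum(axis=0)
    return mean, np.sqrt(var)

rng = np.random.default_rng(6)
X = np.sort(rng.uniform(-5, 5, 300))[:, None]          # e.g. distance for a path-loss model
y = -20 * np.log10(np.abs(X.ravel()) + 1) + rng.normal(0, 1, 300)

shards = np.array_split(np.arange(300), 3)             # data held at three edge devices
experts = [GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1.0)).fit(X[idx], y[idx])
           for idx in shards]

X_test = np.linspace(-5, 5, 50)[:, None]
preds = [e.predict(X_test, return_std=True) for e in experts]
mu, sd = poe_aggregate(np.array([p[0] for p in preds]), np.array([p[1] for p in preds]))
```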
We present a Gaussian process (GP) approach, called Gaussian process hydrodynamics (GPH), for approximating the solution to the Euler and Navier-Stokes (NS) equations. Similar to smoothed particle hydrodynamics (SPH), GPH is a Lagrangian particle-based approach that involves the tracking of a finite number of particles transported by a flow. However, these particles do not represent mollified particles of matter but carry discrete/partial information about the continuous flow. Closure is achieved by placing a divergence-free GP prior ξ on the velocity field and conditioning it on the vorticity at the particle locations. Known physics (e.g., the Richardson cascade and velocity-increment power laws) is incorporated into the GP prior by using physics-informed additive kernels. This is equivalent to expressing ξ as a sum of independent GPs ξ_l, which we call modes, acting at different scales (each mode ξ_l self-activates to represent the formation of eddies at the corresponding scales). This approach enables a quantitative analysis of the Richardson cascade through the analysis of the activation of these modes, and enables us to analyze coarse-grained turbulence statistically rather than deterministically. Because GPH is formulated using the vorticity equations, it does not require solving a pressure equation. By enforcing incompressibility and fluid-structure boundary conditions through the selection of a kernel, GPH requires significantly fewer particles than SPH. Because GPH has a natural probabilistic interpretation, the numerical results come with uncertainty estimates, enabling their incorporation into an uncertainty quantification (UQ) pipeline and the addition/removal of particles (quanta of information) in an adapted manner. The proposed approach is suitable for analysis because it inherits the complexity of state-of-the-art solvers for dense kernel matrices and results in a natural definition of turbulence as information loss. Numerical experiments support the importance of selecting physics-informed kernels and illustrate their major impact on accuracy and stability. Because the proposed approach uses a Bayesian interpretation, it naturally enables data assimilation and predictions and estimations by mixing simulation data and experimental data.
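As a schematic of the additive, multiscale structure described above (not the paper's actual physics-informed kernels), the sketch below sums Gaussian covariances with geometrically decreasing length scales and an assumed power-law variance per scale, so each term plays the role of one mode acting at its own scale.

```python
import numpy as np

def additive_multiscale_kernel(x, xp, n_modes=4, base_scale=1.0, decay=0.5, power=2.0 / 3.0):
    """Sum of Gaussian covariances at scales l_m = base_scale * decay**m.

    Each term stands in for one 'mode' acting at its own scale; the variance of
    mode m follows an assumed power law sigma_m^2 ~ l_m**power (illustrative only).
    """
    r2 = np.sum((np.asarray(x, float) - np.asarray(xp, float)) ** 2)
    k = 0.0
    for m in range(n_modes):
        l = base_scale * decay ** m
        k += (l ** power) * np.exp(-0.5 * r2 / l ** 2)
    return k

# Covariance between two particle positions carrying flow information.
print(additive_multiscale_kernel([0.0, 0.0], [0.3, 0.1]))
```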