Funding: supported by the Platform Development Foundation of the China Institute for Radiation Protection (No. YP21030101), the National Natural Science Foundation of China (General Program, Nos. 12175114 and U2167209), the National Key R&D Program of China (No. 2021YFF0603600), and the Tsinghua University Initiative Scientific Research Program (No. 20211080081).
Abstract: Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposes a grid-AIS method for the global variance reduction problem, based on the AIS method and implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example. For the VENUS-Ⅲ benchmark, the grid-AIS method reduced the variance of the statistical errors over the MESH grids from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction, demonstrating that the grid-AIS method is effective for the global variance reduction problem. The self-shielding calculation shows that the grid-AIS method produces accurate results. Moreover, the grid-AIS method was approximately one order of magnitude more computationally efficient than the AIS method, and approximately two orders of magnitude more efficient than the conventional Monte Carlo method.
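The uniformity objective above can be made concrete with a toy tally. The sketch below (an illustrative attenuation model, not MCShield or the AIS method) scores particles in successive mesh cells, computes each cell's relative error, and reports the across-cell variance of those errors, the quantity the abstract says grid-AIS reduces:

```python
import random
import statistics

def mesh_relative_errors(n_particles, n_cells, p_pass=0.5, seed=0):
    """Toy shielding tally: each particle penetrates successive cells
    with probability p_pass; it scores 1 in every cell it reaches.
    Returns the per-cell relative error of the mean score."""
    rng = random.Random(seed)
    scores = [[0.0] * n_particles for _ in range(n_cells)]
    for i in range(n_particles):
        depth = 0
        while depth < n_cells and rng.random() < p_pass:
            depth += 1
        for c in range(depth):
            scores[c][i] = 1.0
    errors = []
    for c in range(n_cells):
        m = statistics.mean(scores[c])
        s = statistics.stdev(scores[c])
        errors.append(s / (m * n_particles ** 0.5))
    return errors

errors = mesh_relative_errors(20000, 5)
# Deep cells are rarely reached, so their relative errors are larger;
# the spread of `errors` is the global-uniformity metric of interest.
spread = statistics.pvariance(errors)
```

In an analog simulation the spread grows with shield depth; a global variance reduction scheme aims to flatten this profile.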
Funding: supported by the National Natural Science Foundation of China (No. 41272323) and the Tianjin Natural Science Foundation (No. 13JCZDJC35300).
Abstract: The scale of fluctuation is one of the vital parameters for applying random field theory to reliability analysis in geotechnical engineering. In the present study, the fluctuation function method and the weighted curve fitting method are presented to make the calculation simpler and more accurate. The vertical scales of fluctuation of typical soil layers at Tianjin Port were calculated from a large body of geotechnical investigation data, and can serve as guidance for other projects in the area. The influences of sampling interval and type of soil index on the scale of fluctuation were also analyzed, and from these a principle was defined for determining the scale of fluctuation when the sampling interval changes. Because the scale of fluctuation is a basic attribute reflecting the spatial variability of soil, the scales of fluctuation calculated from different soil indexes should be essentially the same. The non-correlation distance method was improved, and the principle for determining the variance reduction function was also discussed.
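As a point of orientation (not the paper's fluctuation function or weighted curve fitting methods), a common textbook estimator takes the scale of fluctuation as twice the area under the sample autocorrelation function, truncated at its first zero crossing. A minimal sketch on a synthetic AR(1) "soil profile" with a known answer:

```python
import random

def scale_of_fluctuation(profile, dz):
    """Estimate the scale of fluctuation as twice the area under the
    sample autocorrelation function, truncating the sum at the ACF's
    first zero crossing."""
    n = len(profile)
    mean = sum(profile) / n
    dev = [v - mean for v in profile]
    var = sum(d * d for d in dev) / n
    delta = dz  # lag-0 term: rho(0) = 1
    for k in range(1, n // 2):
        rho = sum(dev[i] * dev[i + k] for i in range(n - k)) / ((n - k) * var)
        if rho <= 0.0:
            break
        delta += 2.0 * dz * rho
    return delta

# Synthetic profile: AR(1) with lag-1 correlation 0.8 at a 1 m sampling
# interval; its true scale of fluctuation is (1 + 0.8) / (1 - 0.8) = 9 m.
rng = random.Random(1)
x, prof = 0.0, []
for _ in range(5000):
    x = 0.8 * x + rng.gauss(0.0, 1.0)
    prof.append(x)
delta_hat = scale_of_fluctuation(prof, 1.0)
```

The abstract's point that different soil indexes should give essentially the same scale can be checked by running the same estimator on each index profile.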
Funding: supported by the Shanghai Sailing Program, China (No. 21YF1421100) and the Startup Fund for Youngman Research at SJTU.
Abstract: A global variance reduction (GVR) method based on the SPN method is proposed. First, global multi-group cross-sections are obtained by Monte Carlo (MC) global homogenization. Then, the SP3 equation is solved to obtain the global flux distribution. Finally, the global weight windows are approximated from the global flux distribution, and the GVR simulation is performed. This GVR method is implemented as an automatic process in the RMC code. The SP3-coupled GVR method was tested on a modified version of the C5G7 benchmark with a thickened water shield. The results show that the SP3-coupled GVR method can improve the efficiency of MC criticality calculations.
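The last step, turning the deterministic flux map into global weight windows, is commonly done by making window centers inversely proportional to the flux, so that deep, low-flux regions split particles and high-flux regions roulette them. A sketch of that mapping (the source-cell normalization and window width are illustrative assumptions, not RMC's actual parameters):

```python
def weight_windows_from_flux(flux, w_ref=1.0, ratio=5.0):
    """Map a per-cell flux estimate to (lower, upper) weight-window
    bounds. Window centers are w_ref * flux[0] / flux[c], inversely
    proportional to flux and normalized to w_ref in the first cell;
    `ratio` is the upper/lower bound ratio of each window."""
    half = ratio ** 0.5
    return [(w_ref * flux[0] / phi / half, w_ref * flux[0] / phi * half)
            for phi in flux]

# Flux falling by 10x per cell through a shield: window centers rise
# by 10x per cell, encouraging splitting as particles penetrate deeper.
windows = weight_windows_from_flux([1.0, 1e-1, 1e-2, 1e-3])
```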
Funding: partly supported by the National Key Basic Research Program of China (2016YFB1000100) and the National Natural Science Foundation of China (No. 61402490).
Abstract: To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on a variance reduction method: a lightweight, low-overhead, parallelized scheme for the model training process. To validate these claims, we conducted several experiments on multiple classical datasets. The results show that the proposed framework steadily accelerates solver training in both local and distributed modes.
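The variance-reduction ingredient can be illustrated separately from L-BFGS with the classic SVRG corrected gradient on a hypothetical one-dimensional weighted least-squares objective (a sketch of the idea only, not the authors' distributed framework):

```python
import random

def svrg_1d(a, c, w0=0.0, lr=0.1, epochs=30, seed=0):
    """SVRG on f(w) = (1/n) * sum_i 0.5 * c[i] * (w - a[i])**2.
    Each inner step uses the variance-reduced gradient
    g = grad_i(w) - grad_i(w_snap) + full_grad(w_snap)."""
    rng = random.Random(seed)
    n = len(a)
    w = w0
    for _ in range(epochs):
        w_snap = w  # snapshot refreshed once per epoch
        mu = sum(ci * (w_snap - ai) for ci, ai in zip(c, a)) / n
        for _ in range(n):
            i = rng.randrange(n)
            g = c[i] * (w - a[i]) - c[i] * (w_snap - a[i]) + mu
            w -= lr * g
    return w

a = [1.0, 2.0, 3.0, 4.0, 5.0]
c = [0.5, 1.5, 1.0, 0.8, 1.2]
w_star = sum(ci * ai for ci, ai in zip(c, a)) / sum(c)  # exact minimizer
w_hat = svrg_1d(a, c)
```

The correction term vanishes at the optimum, which is what lets variance-reduced methods use a constant step size where plain SGD would have to decay it.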
Abstract: Value-at-Risk (VaR) estimation via Monte Carlo (MC) simulation is studied here. A variance reduction technique is proposed to speed up the MC algorithm. An algorithm for estimating the probability of high portfolio losses (a more general risk measure) based on Cross-Entropy importance sampling is developed. The algorithm can be applied in any light- or heavy-tailed case without extra adaptation, and it does not lose performance in comparison with other known methods. A numerical study is performed in both the light- and heavy-tailed cases, and the variance reduction rate is compared with that of other known methods. The problem of VaR estimation using procedures for estimating the probability of high portfolio losses is also discussed.
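A minimal sketch of the cross-entropy importance sampling idea, applied to a standard-normal tail probability rather than an actual portfolio loss (the mean-shifted Gaussian tilting family and all parameters here are illustrative assumptions):

```python
import math
import random

def ce_tail_probability(threshold, n=10000, rho=0.10, seed=7):
    """Cross-entropy importance sampling for p = P(Z > threshold),
    Z ~ N(0, 1), using a mean-shifted Gaussian N(v, 1) as the
    importance density. Phase 1 adapts v through intermediate levels;
    phase 2 returns the likelihood-ratio estimate of p."""
    rng = random.Random(seed)

    def lr_weight(x, v):
        # density ratio N(0,1)(x) / N(v,1)(x)
        return math.exp(-x * v + 0.5 * v * v)

    v = 0.0
    for _ in range(50):
        xs = sorted(rng.gauss(v, 1.0) for _ in range(n))
        level = min(xs[int((1.0 - rho) * n)], threshold)
        elite = [x for x in xs if x >= level]
        wts = [lr_weight(x, v) for x in elite]
        # CE update: likelihood-ratio-weighted mean of the elite samples
        v = sum(x * w for x, w in zip(elite, wts)) / sum(wts)
        if level >= threshold:
            break
    total = sum(lr_weight(x, v)
                for x in (rng.gauss(v, 1.0) for _ in range(n))
                if x > threshold)
    return total / n

p4 = ce_tail_probability(4.0)   # true value is about 3.17e-5
```

A crude MC estimate of a 3e-5 probability would need millions of samples for the same accuracy; the adapted tilt makes the rare event common under the sampling density.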
Funding: supported in part by the National Natural Science Foundation of China under Grant Nos. 62003245, 72171172, and 92367101; the National Natural Science Foundation of China Basic Science Research Center Program under Grant No. 62088101; the Aeronautical Science Foundation of China under Grant No. 2023Z066038001; and the Shanghai Municipal Science and Technology Major Project under Grant No. 2021SHZDZX0100.
Abstract: This paper studies distributed policy gradient methods in collaborative multi-agent reinforcement learning (MARL), where agents communicating over a network aim to find an optimal policy that maximizes the average of all the agents' local returns. To address the challenges of high variance and bias in stochastic policy gradients for MARL, this paper proposes a distributed policy gradient method with variance reduction, combined with gradient tracking to correct the bias resulting from the difference between local and global gradients. The authors also use importance sampling to handle the distribution shift in the sampling process. The authors then show that the proposed algorithm finds an ε-approximate stationary point, where the convergence depends on the number of iterations, the mini-batch size, the epoch size, the problem parameters, and the network topology. The authors further establish the sample and communication complexity required to obtain an ε-approximate stationary point. Finally, numerical experiments validate the effectiveness of the proposed algorithm.
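The gradient-tracking correction can be shown in isolation on deterministic quadratic objectives (no policy gradients, variance reduction, or importance sampling here, just the tracker y_i that lets each agent follow the global average gradient):

```python
def gradient_tracking(a, W, lr=0.05, steps=1000):
    """Decentralized gradient tracking for
    f(x) = (1/n) * sum_i 0.5 * (x - a[i])**2: agent i holds an iterate
    x[i] and a tracker y[i] of the global average gradient. W must be
    doubly stochastic and respect the communication graph."""
    n = len(a)
    x = [0.0] * n
    g = [x[i] - a[i] for i in range(n)]   # local gradients
    y = g[:]                              # trackers start at local gradients
    for _ in range(steps):
        x_new = [sum(W[i][j] * x[j] for j in range(n)) - lr * y[i]
                 for i in range(n)]
        g_new = [x_new[i] - a[i] for i in range(n)]
        y = [sum(W[i][j] * y[j] for j in range(n)) + g_new[i] - g[i]
             for i in range(n)]
        x, g = x_new, g_new
    return x

# 4 agents on a ring with Metropolis (doubly stochastic) weights.
W = [[0.50, 0.25, 0.00, 0.25],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.25, 0.00, 0.25, 0.50]]
a = [1.0, 2.0, 3.0, 6.0]   # heterogeneous local optima; global optimum is 3.0
x = gradient_tracking(a, W)
```

Without the tracker, each agent's plain gradient pulls it toward its own a[i]; the correction y_new = W·y + g_new - g steers every agent to the global optimum despite only neighbor-to-neighbor communication.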
Funding: partially supported by grants from the National Key Research and Development Program of China (No. 2018YFC0832101) and the National Natural Science Foundation of China (Grant Nos. U20A20229 and 61922073).
Abstract: Distributed stochastic gradient descent and its variants, which run multiple workers in parallel, have been widely adopted for training machine learning models. Among them, local-based algorithms, including Local SGD and FedAvg, have gained much attention due to their superior properties, such as low communication cost and privacy preservation. Nevertheless, when the data distributions on workers are non-identical, local-based algorithms suffer a significant degradation in convergence rate. In this paper, we propose Variance Reduced Local SGD (VRL-SGD) to deal with heterogeneous data. Without extra communication cost, VRL-SGD reduces the gradient variance among workers caused by heterogeneous data, and thus prevents the slow convergence of local-based algorithms. Moreover, we present VRL-SGD-W, with an effective warm-up mechanism, for scenarios where the data among workers are highly diverse. By eliminating the impact of such heterogeneous data, we theoretically prove that VRL-SGD achieves a linear iteration speedup with lower communication complexity even when workers access non-identical datasets. We conduct experiments on three machine learning tasks. The results demonstrate that VRL-SGD performs markedly better than Local SGD on heterogeneous data, and that VRL-SGD-W is much more robust under high data variance among workers.
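The core effect, that subtracting the local snapshot gradient and adding the global snapshot gradient removes cross-worker gradient disagreement, can be seen on two toy quadratic workers (a sketch of the variance-reduction idea only; VRL-SGD maintains its correction locally without the explicit global quantities used here):

```python
# Two workers with heterogeneous data: local objectives
# f_k(w) = 0.5 * (w - a_k)**2, so local gradients are w - a_k.
a = [-3.0, 5.0]     # very different local optima
w_snap = 1.0        # reference (synchronization) point
w = 2.0             # current local iterate on both workers

mu = sum(w_snap - ak for ak in a) / len(a)    # global gradient at w_snap
raw = [w - ak for ak in a]                    # plain local gradients disagree
corrected = [(w - ak) - (w_snap - ak) + mu for ak in a]
```

For these quadratics the corrected gradients coincide exactly, so local steps no longer drift toward each worker's own optimum; in general the correction only shrinks, rather than eliminates, the cross-worker variance.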
Funding: supported by the Aviation Science Foundation of China (20175596020).
Abstract: Main lobe jamming seriously degrades the detection performance of airborne early warning radar, and joint polarization-space processing has become an effective way to suppress it. To avoid the main beam distortion and wave crest migration caused by main lobe jamming in adaptive beamforming, a joint optimization algorithm based on an adaptive polarization canceller (APC) and stochastic variance reduction gradient descent (SVRGD) is proposed. First, the polarization planar array structure and the receiving signal model based on primary and auxiliary array cancellation are established, and an APC iterative algorithm model is constructed to calculate the optimal weight vector of the auxiliary channel. Second, based on the stochastic gradient descent principle, a variance reduction method is introduced that modifies the gradient through inner and outer iterations to reduce the variance of the stochastic gradient estimate; the optimal spatial weight vector is calculated, and an equivalent weight vector is introduced to measure the beamforming effect. Third, by setting up a planar polarization array simulation scene, the performance of the algorithm against main lobe and side lobe interference is analyzed, and its effectiveness is verified under a short snapshot number and a given signal-to-interference-plus-noise ratio.
Abstract: The special-purpose Monte Carlo program McMesh was used to study neutron transport in coal slurries for on-stream determination of slurry parameters. McMesh uses the mesh weight window method as its main variance reduction technique, with other methods such as exponential transforms and correlated sampling included as options. The results calculated with McMesh agreed well with those from MCNP, a general-purpose Monte Carlo program, but McMesh was more efficient and more convenient.
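A minimal sketch of the weight-window mechanic McMesh relies on: particles above the window are split, particles below it play Russian roulette, and the expected total weight is conserved (the window bounds and survival-weight choice here are illustrative):

```python
import random

def apply_weight_window(weight, w_low, w_up, rng):
    """One weight-window check. Splits particles heavier than w_up and
    plays Russian roulette with particles lighter than w_low; the
    expected total weight is preserved, keeping tallies unbiased."""
    if weight > w_up:
        n = int(weight / w_up) + 1
        return [weight / n] * n             # split into n lighter copies
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_up)    # survival weight inside window
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                           # killed by roulette
    return [weight]

rng = random.Random(0)
split = apply_weight_window(5.0, 0.5, 2.0, rng)
# Roulette conserves weight only in expectation: average many trials.
trials = [sum(apply_weight_window(0.1, 0.5, 2.0, rng)) for _ in range(40000)]
mean_w = sum(trials) / len(trials)
```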
Funding: supported by the National Natural Science Foundation of China (Grant No. 11871453) and the Major Key Project of PCL (Grant No. PCL2022A05).
Abstract: This paper studies a class of nonconvex composite optimization problems whose objective is the sum of an average of nonconvex (weakly) smooth functions and a convex nonsmooth function, where the gradient of the former has Hölder continuity. By exploiting the structure of such problems, we first propose a proximal (quasi-)Newton algorithm, wPQN (proximal quasi-Newton algorithm for weakly smooth optimization), and investigate its theoretical complexity for finding an approximate solution. We then propose a stochastic variant, wPSQN (proximal stochastic quasi-Newton algorithm for weakly smooth optimization), which allows a random subset of component functions to be used at each iteration. Moreover, motivated by the recent success of variance reduction techniques, we propose two variance-reduced algorithms, wPSQN-SVRG and wPSQN-SARAH, and investigate their computational complexity separately.
Abstract: A traditional method of Monte Carlo computer simulation is to obtain uniformly distributed random numbers on the interval from zero to one from a linear congruential generator (LCG) or other methods. Random variates can then be obtained by applying the inverse transformation technique to the random numbers, and used as input to a computer simulation. A response variable is obtained from the simulation results. The response variable may be biased for various reasons; one is the presence of small traces of serial correlation in the random numbers. The purpose of this paper is to introduce an alternative method of response variable acquisition via a power transformation applied to the response variable. The power transformation produces a new variable that is negatively correlated with the response variable. The response variable is then regressed on its power transformation to convert the units of the power-transformed variable back to those of the original response variable. A weighted combination of these two variables gives the final estimate, which is shown to have negligible bias. The correlations of various antithetic variates obtained from the power transformation are derived and illustrated to provide insights for this research and for future work on this method.
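The claim that a power transformation yields a negatively correlated companion variable is easy to check numerically. The sketch below verifies only that first ingredient on a hypothetical positive response (the regression and weighted-combination steps of the paper are omitted):

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(42)
# Hypothetical positive "response variable" from a toy simulation.
y = [rng.expovariate(1.0) + 0.1 for _ in range(20000)]
z = [v ** -1.0 for v in y]   # power transformation with negative exponent
r = pearson(y, z)
```

Since z is a strictly decreasing function of y, the correlation is negative, which is the property that makes z usable as an antithetic-style companion estimator.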
Abstract: We consider a fundamental problem in machine learning, structural risk minimization, which can be represented as the average of a large number of smooth component functions plus a simple, convex (but possibly non-smooth) function. In this paper, we propose a novel proximal variance-reducing stochastic method building on Point-SAGA. Our method performs two proximal operator evaluations by combining fast Douglas–Rachford splitting, and follows the FISTA scheme in its choice of momentum factors. We show that the objective function value converges at the rate O(1/k) when each loss function is convex and smooth. In addition, we prove that our method achieves a linear convergence rate for strongly convex and smooth loss functions. Experiments demonstrate the effectiveness of the proposed algorithm, which shows good acceleration especially when the loss function is ill-conditioned.
Abstract: Driven by large-scale optimization problems arising from machine learning, the development of stochastic optimization methods has grown enormously, and numerous methods have been built on the vanilla stochastic gradient descent method. However, for most algorithms, the convergence rate in the stochastic setting does not simply match that in the deterministic setting. The main goal of this paper is to better understand the gap between deterministic and stochastic optimization. Specifically, we are interested in Nesterov acceleration of gradient-based approaches. In our study, we focus on acceleration of the stochastic mirror descent method with an implicit regularization property. Assuming that the problem objective is smooth and convex or strongly convex, our analysis prescribes the method parameters that ensure fast convergence of the estimation error and satisfactory numerical performance.
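The unaccelerated building block, mirror descent with the entropy mirror map (exponentiated gradient), can be sketched in a few lines; the Nesterov acceleration analyzed in the paper is omitted:

```python
import math

def mirror_descent_simplex(grad, x0, lr, steps):
    """Mirror descent with the entropy mirror map, a.k.a. exponentiated
    gradient: x_i <- x_i * exp(-lr * g_i), then renormalize. The update
    keeps iterates on the probability simplex automatically."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi * math.exp(-lr * gi) for xi, gi in zip(x, g)]
        s = sum(x)
        x = [xi / s for xi in x]
    return x

# Minimize the linear function <c, x> over the simplex: the optimum puts
# all mass on the smallest-cost coordinate (index 1 here).
c = [3.0, 1.0, 2.0]
x = mirror_descent_simplex(lambda x: c, [1 / 3] * 3, lr=0.5, steps=60)
```

The entropy mirror map is what adapts the method to simplex-constrained geometry; the Euclidean mirror map would recover plain projected gradient descent.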
Funding: supported by the National Natural Science Foundation of China (Grant No. 51175425) and the Special Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20116102110003).
Abstract: Two revised regional importance measures (RIMs), the revised contribution to variance of sample mean (RCVSM) and the revised contribution to variance of sample variance (RCVSV), are defined herein using the revised means of the sample mean and sample variance, which vary with the reduced range of the epistemic parameter. The RCVSM and RCVSV can be computed from the same set of samples, so no extra computational cost is introduced relative to the computation of CVSM and CVSV. From the plots of RCVSM and RCVSV, accurate quantitative information on the variance reductions of the sample mean and sample variance can be read, because the upper bound of the range of the epistemic parameter is reduced. For a general quadratic polynomial output, analytical solutions of the original and revised RIMs are given. A numerical example is employed, and the results demonstrate that the analytical results are consistent and accurate. An engineering example is used to verify the validity and rationality of the revised RIMs, which can instruct engineers in reducing the variance of the sample mean and sample variance by reducing the range of the epistemic parameters.
Abstract: This paper deals with Monte Carlo simulation in a Bayesian framework. It shows the importance of Monte Carlo experiments through refined descriptive sampling within the autoregressive model Xt = ρXt-1 + Yt, where 0 < ρ < 1 and the errors Yt are independent random variables following an exponential distribution with parameter θ. To achieve this, a Bayesian Autoregressive Adaptive Refined Descriptive Sampling (B2ARDS) algorithm is proposed to estimate the parameters ρ and θ of such a model by a Bayesian method. We used the same prior as that already used by some authors, and computed its properties when the normality assumption on the errors is relaxed to an exponential distribution. The results show that the B2ARDS algorithm provides accurate and efficient point estimates.
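The model itself is easy to simulate and sanity-check. The sketch below draws from Xt = ρXt-1 + Yt with exponential errors and recovers ρ and θ by simple moment estimates (not the paper's Bayesian B2ARDS estimator):

```python
import random

def simulate_ar1_exp(rho, theta, n, seed=3):
    """Simulate X_t = rho * X_{t-1} + Y_t with i.i.d. Y_t ~ Exp(theta)."""
    rng = random.Random(seed)
    x = 1.0 / (theta * (1.0 - rho))   # start at the stationary mean
    xs = []
    for _ in range(n):
        x = rho * x + rng.expovariate(theta)
        xs.append(x)
    return xs

def moment_estimates(xs):
    """Method-of-moments estimates: rho_hat is the lag-1 sample
    autocorrelation; theta_hat follows from the stationary mean
    E[X] = 1 / (theta * (1 - rho))."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in xs)
    rho_hat = num / den
    theta_hat = 1.0 / ((1.0 - rho_hat) * m)
    return rho_hat, theta_hat

xs = simulate_ar1_exp(rho=0.6, theta=2.0, n=20000)
rho_hat, theta_hat = moment_estimates(xs)
```

Such crude estimates give a useful baseline against which a Bayesian point estimator like B2ARDS can be compared for accuracy and efficiency.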