An inverse analysis is presented to estimate a line heat source in two-dimensional steady-state and transient heat transfer problems. A constant heat source is considered in the steady-state heat transfer problem (a parameter estimation problem), and a time-varying heat source is considered in the transient heat transfer problem (a function estimation problem). Since a general irregular 2D heat-conducting body is considered, body-fitted grid generation is used to mesh the domain. The governing equations and associated boundary and initial conditions are then transformed from the physical domain to the computational domain, and the finite difference method is used to solve the governing equations and obtain the temperature distribution in the body. Using an efficient, accurate, and very easy to implement sensitivity analysis incorporated in a gradient-based minimization method (here, the steepest descent method), the unknown heat source is estimated accurately. In the function estimation part, it is assumed that there is no prior information on the functional form of the heat source, and the estimation process can be performed with a reasonable initial guess for the heat source. The main advantage of the proposed inverse analysis is that the sensitivity matrix (and hence the gradient of the objective function with respect to the unknown variables) can be computed during the direct heat transfer solution through new yet simple explicit expressions, with no need to solve extra equations such as the sensitivity and adjoint problems and impose additional computational costs comparable to those of the direct problem solution. Some test cases are presented to investigate the accuracy, efficiency, and effect of measurement error on the estimated parameter and function for the line heat source.
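A minimal numerical sketch of the gradient-based estimation step described above, assuming a linear scalar forward model with known sensitivity coefficients (all values illustrative; the paper's body-fitted finite-difference solver is not reproduced):

```python
# Hedged sketch: steepest-descent recovery of a constant heat-source
# strength q from synthetic "measured" temperatures.  The forward model
# T_i = q * S_i and the sensitivities S_i = dT_i/dq are assumptions
# standing in for the paper's explicit sensitivity expressions.

S = [0.8, 1.1, 0.9, 1.3, 1.0]           # hypothetical sensitivities
q_true = 5.0
Y = [q_true * s for s in S]              # synthetic exact measurements

def gradient(q):
    # dJ/dq for the least-squares objective J(q) = sum_i (q*S_i - Y_i)^2
    return sum(2.0 * s * (q * s - y) for s, y in zip(S, Y))

q = 0.0                                  # initial guess
step = 0.1                               # fixed step length (assumption)
for _ in range(200):
    q -= step * gradient(q)

print(round(q, 4))                       # converges to q_true = 5.0
```

Because the model is linear in q, the iteration contracts toward the least-squares solution; a real transient problem would recompute the sensitivities alongside each direct solve.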
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and a maximum likelihood estimate of the identification parameters is then given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It is proved that the corrector has a smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
Iteration methods for the maximum likelihood estimator and their convergence are discussed in this paper. We study the Gauss-Newton method and give a set of sufficient conditions for the convergence of asymptotic numerical stability. The modified Gauss-Newton method is also studied, and sufficient conditions for its convergence are presented. Two numerical examples are given to illustrate our results.
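A compact sketch of the Gauss-Newton iteration discussed above, for a one-parameter nonlinear least-squares fit (the model, data, and starting value are invented for illustration; the paper's convergence conditions are not reproduced):

```python
import math

# Hedged sketch: Gauss-Newton for fitting y = exp(a*x) to noise-free
# synthetic data.  With one parameter, the normal-equation step reduces
# to a scalar update a += (J'r)/(J'J).

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
a_true = 0.7
ys = [math.exp(a_true * x) for x in xs]   # synthetic observations

a = 0.0                                    # initial guess
for _ in range(20):
    J = [x * math.exp(a * x) for x in xs]              # Jacobian df/da
    r = [y - math.exp(a * x) for x, y in zip(xs, ys)]  # residuals
    JtJ = sum(j * j for j in J)
    Jtr = sum(j * e for j, e in zip(J, r))
    a += Jtr / JtJ                                      # Gauss-Newton step

print(round(a, 6))                         # recovers a_true = 0.7
```

On this zero-residual problem the iteration converges quadratically near the solution, which is the behavior the sufficient conditions are meant to guarantee.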
The calibration of transfer functions is essential for accurate pavement performance predictions in the PavementME design. Several studies have used the least squares approach to calibrate these transfer functions. Least squares is a widely used, simple approach that rests on certain assumptions. The literature shows that these assumptions may not hold for non-normal distributions. This study introduces a new methodology for calibrating the transverse cracking and international roughness index (IRI) models in rigid pavements using maximum likelihood estimation (MLE). Synthetic data for transverse cracking, with and without variability, are generated to illustrate the applicability of MLE using different known probability distributions (exponential, gamma, log-normal, and negative binomial). The approach uses measured data from the Michigan Department of Transportation's (MDOT) pavement management system (PMS) database for 70 jointed plain concrete pavement (JPCP) sections to calibrate and validate the transfer functions. The MLE approach is combined with resampling techniques to improve the robustness of the calibration coefficients. The results show that the MLE transverse cracking model using the gamma distribution consistently outperforms least squares for both synthetic and observed data. For observed data, MLE estimates of the parameters produced lower SSE and bias than least squares (e.g., for the transverse cracking model, the SSE values are 3.98 vs. 4.02, and the bias values are 0.00 and -0.41). Although the negative binomial distribution is the most suitable fit for the IRI model under MLE, the least squares results are slightly better than MLE; the bias values are -0.312 and 0.000 for the MLE and least squares methods, respectively.
Overall, the findings indicate that MLE is a robust method for calibration, especially for non-normally distributed data such as transverse cracking.
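The distribution-fitting idea behind this calibration can be sketched with a closed-form case. The study fits a gamma distribution by MLE; here an exponential distribution is used instead because its MLE is simply one over the sample mean (the data are synthetic and all values are illustrative):

```python
import math
import random

# Hedged sketch: maximum-likelihood fit of an exponential distribution
# to synthetic "measured" values, standing in for the paper's gamma fit
# to MDOT cracking data.  The exponential MLE is lambda_hat = n / sum(x).

random.seed(0)
rate_true = 2.0
data = [random.expovariate(rate_true) for _ in range(20000)]

rate_mle = len(data) / sum(data)         # closed-form exponential MLE

# Log-likelihood at the MLE; competing distributions would be compared
# on this scale in the same spirit as the study's model selection.
loglik = sum(math.log(rate_mle) - rate_mle * x for x in data)

print(round(rate_mle, 2))
```

For the gamma or negative binomial cases there is no closed form and the likelihood must be maximized numerically, which is where MLE departs from the least squares recipe.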
This paper deals with the consistency and strong consistency of the maximum likelihood estimators of the mean and variance of drift fractional Brownian motions observed at discrete time instants. Both the central limit theorem and Berry-Esséen bounds for these estimators are obtained by using Stein's method via Malliavin calculus.
According to the principle that failure data are the basis of software reliability analysis, we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning out conclusions from the fitting results on the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and on the parameter estimation methods of the experimental models in the SRES.
In this paper, we consider the construction of approximate profile-likelihood confidence intervals for the parameters of the 2-parameter Weibull distribution based on small type-2 censored samples. In previous research, the traditional Wald method has been used to construct approximate confidence intervals for the 2-parameter Weibull distribution under the type-2 censoring scheme. However, the Wald technique is based on a normality assumption and thus may not produce accurate interval estimates for small samples. The profile-likelihood and Wald confidence intervals are constructed for the shape and scale parameters of the 2-parameter Weibull distribution based on simulated and real type-2 censored data, and are then compared using confidence length and coverage probability.
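A sketch of the profile-likelihood interval for the Weibull shape parameter, using complete (uncensored) synthetic data for simplicity; type-2 censoring changes the likelihood and is not reproduced here. For fixed shape k, the scale MLE has the closed form λ^k = mean(x^k), which lets the scale be profiled out:

```python
import math
import random

# Hedged sketch: 95% profile-likelihood interval for the Weibull shape k.
# Data are simulated by inversion: X = lam * (-ln U)^(1/k).

random.seed(1)
k_true, lam, n = 1.5, 1.0, 2000
data = [lam * (-math.log(random.random())) ** (1.0 / k_true)
        for _ in range(n)]
sum_logx = sum(math.log(x) for x in data)

def profile_loglik(k):
    # For fixed k, the scale MLE satisfies lambda^k = mean(x^k).
    lam_k = sum(x ** k for x in data) / n
    return n * math.log(k) - n * math.log(lam_k) + (k - 1) * sum_logx - n

ks = [0.5 + 0.005 * i for i in range(701)]    # grid over k in [0.5, 4]
lp = [profile_loglik(k) for k in ks]
lmax = max(lp)
k_hat = ks[lp.index(lmax)]
# 95% interval: k with likelihood-ratio statistic below chi2(1) = 3.841
inside = [k for k, l in zip(ks, lp) if 2.0 * (lmax - l) <= 3.841]
lo, hi = min(inside), max(inside)
print(round(lo, 3), round(k_hat, 3), round(hi, 3))
```

Unlike a Wald interval, this interval need not be symmetric about the estimate, which is the property that helps in small samples.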
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., data that sum to a constant such as 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data are present. The linear regression model is a commonly used statistical modeling technique applied in various settings to find relationships between variables of interest. When estimating linear regression parameters, which are useful for things like future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, under maximum likelihood or maximum a posteriori (MAP) criteria. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; finding the parameters that maximize this expected log-likelihood is the job of the maximization (M) step. This study examined how well the EM algorithm performs on a simulated compositional dataset with missing observations, using both robust least squares and ordinary least squares regression techniques.
The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
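The E/M alternation described above can be sketched on a toy regression with missing responses: the E-step fills each missing y with its expected value under the current fit, and the M-step re-runs ordinary least squares on the completed data. The data and missingness pattern are invented, and the study's compositional log-ratio transforms are omitted:

```python
# Hedged sketch of EM for simple linear regression with missing responses.
# Fixed point: points imputed exactly on the fitted line contribute zero
# to the normal equations, so EM converges to OLS on the observed pairs.

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 3.1, 4.9, None, 9.1, None]    # None marks a missing response

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b                 # intercept, slope

a, b = 0.0, 0.0                           # initial guess
for _ in range(50):
    completed = [y if y is not None else a + b * x   # E-step: impute
                 for x, y in zip(xs, ys)]
    a, b = ols(xs, completed)             # M-step: refit

print(round(a, 3), round(b, 3))
```

Mean imputation, by contrast, fills the gaps once with a constant, which flattens the slope; k-NN fills them from neighboring observations. The comparison in the study is between these one-shot schemes and the iterative EM refinement.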
In the constant-stress accelerated life test, estimation issues are discussed for a generalized half-normal distribution under a log-linear life-stress model. The maximum likelihood estimates, with a corresponding fixed-point-type iterative algorithm for the unknown parameters, are presented, and least squares estimates of the parameters are also proposed. Meanwhile, confidence intervals for the model parameters are constructed using asymptotic theory and the bootstrap technique. A numerical illustration is given to investigate the performance of our methods.
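The log-linear life-stress relation itself is easy to sketch: characteristic life is modeled as θ(s) = exp(b0 + b1·s), so b0 and b1 can be recovered by least squares on log lifetimes. The stress levels and lives below are invented, and the generalized half-normal likelihood machinery is not reproduced:

```python
import math

# Hedged sketch: least-squares fit of the log-linear life-stress model
# log(life) = b0 + b1 * stress, on noise-free synthetic data.

stresses = [1.0, 1.5, 2.0, 2.5]
b0_true, b1_true = 4.0, -1.2
lives = [math.exp(b0_true + b1_true * s) for s in stresses]

ys = [math.log(t) for t in lives]
n = len(stresses)
ms, my = sum(stresses) / n, sum(ys) / n
b1 = sum((s - ms) * (y - my) for s, y in zip(stresses, ys)) / \
     sum((s - ms) ** 2 for s in stresses)
b0 = my - b1 * ms
print(round(b0, 3), round(b1, 3))   # recovers 4.0 and -1.2
```

In the paper this regression structure is embedded in the likelihood, so the MLE couples b0, b1, and the shape parameter; the least squares version above is the simpler of the two estimators it compares.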
Real-time remaining useful life (RUL) prediction based on condition monitoring is an essential part of condition-based maintenance (CBM). Current methods for real-time RUL prediction of nonlinear degradation processes do not consider measurement error, and their forecasting uncertainty is large. Therefore, an approximate analytical RUL distribution in closed form is proposed for a nonlinear Wiener-based degradation process with measurement errors. The maximum likelihood estimation approach is used to estimate the unknown fixed parameters in the proposed model. When newly observed data become available, the random parameter is updated by the Bayesian method so that the estimation adapts to the item's individual characteristics and the uncertainty of the estimation is reduced. The simulation results show that accounting for measurement errors in the degradation process can significantly improve the accuracy of real-time RUL prediction.
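The ML step for a Wiener degradation model can be sketched in its simplest, linear, error-free form: independent increments ΔX ~ N(μΔt, σ²Δt) give closed-form estimates of drift and diffusion. The paper's model is nonlinear with measurement error and Bayesian updating, none of which is reproduced here:

```python
import math
import random

# Hedged sketch: MLE of drift mu and diffusion sigma^2 of a linear
# Wiener degradation process from its increments (synthetic data).

random.seed(2)
mu_true, sigma_true, dt, n = 0.5, 0.2, 0.1, 20000
dx = [mu_true * dt + sigma_true * math.sqrt(dt) * random.gauss(0, 1)
      for _ in range(n)]

mu_hat = sum(dx) / (n * dt)                                  # drift MLE
sigma2_hat = sum((d - mu_hat * dt) ** 2 for d in dx) / (n * dt)  # diffusion

print(round(mu_hat, 2), round(math.sqrt(sigma2_hat), 2))
```

With measurement error added, the increments are no longer independent and the likelihood involves the full covariance of the observed path, which is why the paper needs a dedicated estimation procedure.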
In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Rayleigh distribution is studied. Different methods of estimation are discussed. They include the mid-point approximation estimator, the maximum likelihood estimator, the moment estimator, the Bayes estimator, the sampling-adjustment moment estimator, the sampling-adjustment maximum likelihood estimator, and an estimator based on percentiles. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.
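The mid-point approximation estimator mentioned above is the simplest of the lot: each interval observation is replaced by the midpoint of its inspection interval, after which the closed-form Rayleigh MLE σ² = Σx²/(2n) applies. Equal-width inspection intervals and no progressive withdrawals are simplifying assumptions here:

```python
import math
import random

# Hedged sketch: mid-point approximation for interval-censored Rayleigh
# data.  Exact lifetimes are simulated by inversion, then coarsened to
# the midpoints of width-0.5 inspection intervals.

random.seed(3)
sigma_true, n, width = 2.0, 50000, 0.5
exact = [sigma_true * math.sqrt(-2.0 * math.log(random.random()))
         for _ in range(n)]
mids = [(math.floor(x / width) + 0.5) * width for x in exact]

sigma_hat = math.sqrt(sum(m * m for m in mids) / (2 * n))  # Rayleigh MLE
print(round(sigma_hat, 2))
```

The small upward bias from grouping (of order width²/12 in the second moment) is exactly the kind of effect the paper's Monte Carlo comparison measures.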
Estimation of the unknown mean μ and variance σ² of a univariate Gaussian distribution given a single study variable x is considered. We propose an approach that does not require initialization of the unknown distribution parameters. The approach is motivated by linearizing the Gaussian distribution through differential techniques and estimating μ and σ² as regression coefficients using the ordinary least squares method. Two simulated datasets, on hereditary traits and on morphometric analysis of housefly strains, are used to evaluate the proposed method (PM), maximum likelihood estimation (MLE), and the method of moments (MM). The methods are evaluated by re-estimating the required Gaussian parameters on both large and small samples. The root mean squared error (RMSE), mean error (ME), and standard deviation (SD) are used to assess the accuracy of the PM and MLE; confidence intervals (CIs) are also constructed for the ME estimate. The PM compares well with both the MLE and MM approaches, as all produce estimates whose errors have good asymptotic properties; small CIs are also observed for the ME using the PM and MLE. The PM can be used symbiotically with the MLE to provide initial approximations at the expectation-maximization step.
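The linearization idea can be sketched directly: for a Gaussian density f, d/dx log f(x) = μ/σ² − x/σ², a straight line in x, so μ and σ² fall out of the OLS intercept and slope. Below the exact pdf is evaluated on a grid and differentiated numerically; in the paper's setting the density would have to come from the data, so this is a simplified illustration of the mechanism only:

```python
import math

# Hedged sketch: recover mu and sigma^2 of a Gaussian by regressing the
# numerical derivative of log f on x (slope = -1/sigma^2, intercept =
# mu/sigma^2).  Grid and parameter values are illustrative.

mu_true, sigma_true = 3.0, 1.5

def logf(x):
    return -0.5 * ((x - mu_true) / sigma_true) ** 2 \
           - math.log(sigma_true * math.sqrt(2 * math.pi))

h = 1e-5
xs = [1.0 + 0.2 * i for i in range(21)]                    # grid near mean
ds = [(logf(x + h) - logf(x - h)) / (2 * h) for x in xs]   # d/dx log f

n = len(xs)
mx, md = sum(xs) / n, sum(ds) / n
slope = sum((x - mx) * (d - md) for x, d in zip(xs, ds)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = md - slope * mx

sigma2_hat = -1.0 / slope
mu_hat = intercept * sigma2_hat
print(round(mu_hat, 3), round(sigma2_hat, 3))   # ≈ 3.0 and 2.25
```

No initial values for μ or σ² are needed anywhere in the recipe, which is the property the abstract emphasizes and the reason it can seed an EM run.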
By exponentiating each of the components of a finite mixture of two exponential components by a positive parameter, several shapes of hazard rate functions are obtained. Maximum likelihood and Bayes methods, based on the square error loss function and an objective prior, are used to obtain estimators under the balanced square error loss function for the parameters, survival function, and hazard rate function of a mixture of two exponentiated exponential components. Approximate interval estimators of the model parameters are also obtained.
Based on the multivariate continuous-time autoregressive (CAR) model, this paper presents a new time-domain modal identification method for linear time-invariant systems driven by uniformly modulated Gaussian random excitation. The method can identify the physical parameters of the system from the response data. First, the structural dynamic equation is transformed into a continuous-time autoregressive (CAR) model of order 3. Second, based on the assumption that the uniformly modulated function is approximately constant over a very short period of time, and on the properties of the strong solution of the stochastic differential equation, the uniformly modulated function is identified piecewise. Two special situations are discussed. Finally, by virtue of the Girsanov theorem, we introduce a likelihood function, which is simply a conditional density function. Maximizing the likelihood function gives the exact maximum likelihood estimators of the model parameters. Numerical results show that the method has high precision and is computationally efficient.
In this paper, a semiparametric two-sample density ratio model is considered, and the empirical likelihood method is applied to estimate its parameters. A commonly occurring problem in computation is that the empirical likelihood function may be a concave-convex function. Here a simple Lagrange saddle point algorithm is presented for computing the saddle point of the empirical likelihood function when the Lagrange multiplier has no explicit solution, from which the maximum empirical likelihood estimates (MELE) of the parameters are obtained. Monte Carlo simulations are presented to illustrate the Lagrange saddle point algorithm.
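The inner Lagrange-multiplier step of empirical likelihood can be sketched in its textbook one-sample form, for a candidate mean μ: solve Σᵢ (xᵢ−μ)/(1+t(xᵢ−μ)) = 0 for the multiplier t, then form the implied probability weights. The data and candidate mean are invented, and the paper's two-sample density-ratio structure is not reproduced:

```python
# Hedged sketch: empirical-likelihood multiplier for a candidate mean,
# solved by bisection on the interval where all 1 + t*z_i stay positive.

data = [1.2, 0.8, 2.5, 1.9, 0.4, 1.1, 1.7, 2.2]
mu = 1.3                                  # candidate mean (assumption)
z = [x - mu for x in data]

def g(t):
    # Stationarity condition in t; strictly decreasing on the bracket.
    return sum(zi / (1.0 + t * zi) for zi in z)

lo, hi = -1.0 / max(z) + 1e-9, -1.0 / min(z) - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
t = 0.5 * (lo + hi)

n = len(data)
w = [1.0 / (n * (1.0 + t * zi)) for zi in z]   # EL probability weights
print(round(sum(w), 6))                         # → 1.0
```

At the solution the weights sum to one automatically and satisfy the mean constraint; when there is no explicit solution for t, a root-finding or saddle point search like this replaces the closed form, which is the situation the paper's algorithm addresses.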
In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Pareto distribution is studied. Different methods of estimation are discussed, including the mid-point approximation estimator, the maximum likelihood estimator, and the moment estimator. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.
In this paper, a modified form of the traditional inverse Lomax distribution is proposed and its characteristics are studied. The new distribution, called the modified logarithmic transformed inverse Lomax distribution, is generated by adding a new shape parameter based on the logarithmic transformation method. It contains two shape parameters and one scale parameter and exhibits different shapes of the probability density and hazard rate functions. The new shape parameter increases the flexibility of the statistical properties of the traditional inverse Lomax distribution, including the mean, variance, skewness, and kurtosis. The moments, entropies, order statistics, and other properties are discussed. Six methods of estimation are considered for the distribution parameters, and a simulation study is performed to compare the performance of the different estimators. To show the flexibility and applicability of the proposed distribution, two real data sets from the engineering and medical fields are analyzed. The simulation results and the real data analysis showed that the Anderson-Darling estimates have the smallest mean square errors among all estimates. The analysis of the real data sets also showed that the traditional inverse Lomax distribution and some of its generalizations fall short in modeling engineering and medical data; our proposed distribution overcomes this shortcoming and provides a good fit, which makes it a suitable choice for modeling such data sets.
In this paper, inference on parameter estimation of the generalized Rayleigh distribution is investigated for progressively type-I interval censored samples. Estimators of the distribution parameters via maximum likelihood, the method of moments, and probability plots are derived, and their performance is compared based on simulation results in terms of mean squared error and bias. A case application to plasma cell myeloma data is used to illustrate the proposed estimation methods.
We characterize, using near-infrared spectroscopy (NIRS), the hemodynamic response changes in the main olfactory bulb (MOB) of anesthetized rats during the presentation of three different odorants: (i) plain air as a reference (Blank), (ii) 2-heptanone (HEP), and (iii) isopropylbenzene (Ib). The odorants generate different changes in the concentration of oxy-hemoglobin. Our results suggest that NIRS technology might be useful for discriminating various odorants in a non-invasive manner using animals with a superb olfactory system.
Funding: the Michigan Department of Transportation (MDOT) for the financial support of this study (report no. SPR1723).
Funding: supported by the National Science Foundations (DMS0504783, DMS0604207) and the National Science Fund for Distinguished Young Scholars of China (70825005).
Funding: the National Natural Science Foundation of China.
Funding: supported by the National Natural Science Foundation of China (11501433, 71473187) and the Natural Science Basic Research Plan in Shaanxi Province of China (2016JQ1014).
Funding: Projects (51475462, 61374138, 61370031) supported by the National Natural Science Foundation of China.
Funding: the NSF of China (11271155, 11001105, 11071126, 10926156, 11071269); the Specialized Research Fund for the Doctoral Program of Higher Education (20110061110003, 20090061120037); the Scientific Research Fund of Jilin University (201100011, 200903278); the NSF of Jilin Province (20101596, 20130101066JC).
Abstract: In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Rayleigh distribution is studied. Different methods of estimation are discussed, including the mid-point approximation estimator, the maximum likelihood estimator, the moment estimator, the Bayes estimator, the sampling-adjustment moment estimator, the sampling-adjustment maximum likelihood estimator, and an estimator based on percentiles. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.
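The mid-point approximation estimator has a particularly simple form in the Rayleigh case. The sketch below is a simplified version (plain interval censoring with equally spaced inspections; the progressive removals of a true progressively censored scheme are omitted): only the inspection interval containing each failure is recorded, and the interval mid-points are plugged into the complete-sample Rayleigh MLE.

```python
import random, math

random.seed(1)

# Simulate Rayleigh(sigma) lifetimes via inverse transform:
# F(x) = 1 - exp(-x^2/(2 sigma^2))  =>  X = sigma*sqrt(-2 ln U)
sigma, n = 2.0, 5000
xs = [sigma*math.sqrt(-2.0*math.log(random.random())) for _ in range(n)]

# Interval censoring: inspections every w time units; only the interval
# containing each failure is observed, represented by its mid-point
w = 0.5
mids = [(math.floor(x/w) + 0.5)*w for x in xs]

# Mid-point approximation estimator: plug mid-points into the
# complete-sample Rayleigh MLE  sigma^2 = sum(x_i^2)/(2n)
sigma_hat = math.sqrt(sum(m*m for m in mids)/(2.0*n))
```

The estimator inherits a small bias from the discretization (of order w²), which is exactly the kind of effect the paper's Monte Carlo comparison quantifies.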
Abstract: Estimation of the unknown mean μ and variance σ² of a univariate Gaussian distribution given a single study variable x is considered. We propose an approach that does not require initialization of the unknown distribution parameters. The approach linearizes the Gaussian distribution through differential techniques and estimates μ and σ² as regression coefficients using the ordinary least squares method. Two simulated datasets, on hereditary traits and on morphometric analysis of housefly strains, are used to evaluate the proposed method (PM), maximum likelihood estimation (MLE), and the method of moments (MM). The methods are evaluated by re-estimating the Gaussian parameters on both large and small samples. The root mean squared error (RMSE), mean error (ME), and standard deviation (SD) are used to assess the accuracy of the PM and MLE; confidence intervals (CIs) are also constructed for the ME estimate. The PM compares well with both the MLE and MM approaches: all three produce estimates whose errors have good asymptotic properties, and small CIs are observed for the ME under the PM and MLE. The PM can be used symbiotically with the MLE to provide initial approximations at the expectation-maximization step.
Abstract: By exponentiating each component of a finite mixture of two exponential components by a positive parameter, several shapes of hazard rate function are obtained. Maximum likelihood and Bayes methods, based on the squared error loss function and an objective prior, are used to obtain estimators under a balanced squared error loss function for the parameters, survival function, and hazard rate function of a mixture of two exponentiated exponential components. Approximate interval estimators of the model parameters are also obtained.
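To make the exponentiation concrete: each component CDF becomes F_i(x) = (1 − e^(−λx))^α_i, and mixing two such components produces the varied hazard shapes mentioned above. A minimal sketch follows (parameter values are illustrative; when both shape parameters equal 1 the model reduces to a plain exponential with constant hazard λ):

```python
import math

def mix_pdf(x, p, lam, a1, a2):
    """Mixture of two exponentiated-exponential components,
    F_i(x) = (1 - exp(-lam*x))**a_i."""
    u = 1.0 - math.exp(-lam*x)
    f1 = a1*lam*math.exp(-lam*x)*u**(a1 - 1)
    f2 = a2*lam*math.exp(-lam*x)*u**(a2 - 1)
    return p*f1 + (1.0 - p)*f2

def mix_cdf(x, p, lam, a1, a2):
    u = 1.0 - math.exp(-lam*x)
    return p*u**a1 + (1.0 - p)*u**a2

def hazard(x, p, lam, a1, a2):
    return mix_pdf(x, p, lam, a1, a2)/(1.0 - mix_cdf(x, p, lam, a1, a2))

# Crude numerical check that the mixture density integrates to about 1
xs = [i*0.01 for i in range(1, 3001)]
area = sum(mix_pdf(x, 0.4, 1.0, 0.5, 3.0)*0.01 for x in xs)
```

Choosing α < 1 for one component and α > 1 for the other is what yields non-monotone hazard shapes such as bathtub-like curves.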
Funding: supported by the National Natural Science Foundation of China (No. 50278017)
Abstract: Based on the multivariate continuous time autoregressive (CAR) model, this paper presents a new time-domain modal identification method for linear time-invariant systems driven by uniformly modulated Gaussian random excitation. The method can identify the physical parameters of the system from the response data. First, the structural dynamic equation is transformed into a CAR model of order 3. Second, based on the assumption that the uniformly modulated function is approximately constant over a very short period of time, and on the properties of the strong solution of the stochastic differential equation, the uniformly modulated function is identified piecewise; two special situations are discussed. Finally, by virtue of the Girsanov theorem, we introduce a likelihood function, which is simply a conditional density function. Maximizing this likelihood function gives the exact maximum likelihood estimators of the model parameters. Numerical results show that the method has high precision and is computationally efficient.
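The Girsanov-based likelihood step can be sketched on a scalar analogue (an illustrative one-dimensional SDE, not the paper's order-3 multivariate CAR model). For dX = θX dt + σ dW, the Girsanov log-likelihood is quadratic in θ, so maximizing it yields an explicit drift estimator from a single discretized path:

```python
import random, math

random.seed(3)

# Euler-Maruyama simulation of dX = theta*X dt + sig dW
theta, sig, dt, n = -1.0, 0.3, 0.001, 200000
x, path = 1.0, [1.0]
for _ in range(n):
    x += theta*x*dt + sig*math.sqrt(dt)*random.gauss(0.0, 1.0)
    path.append(x)

# Girsanov log-likelihood (up to theta-free terms):
#   log L(theta) = (theta/sig^2) int X dX - (theta^2/(2 sig^2)) int X^2 dt
# Setting d(log L)/d(theta) = 0 gives the explicit MLE:
num = sum(path[i]*(path[i+1] - path[i]) for i in range(n))
den = sum(path[i]**2*dt for i in range(n))
theta_hat = num/den
```

The appeal, as in the paper, is that the maximizer is exact and explicit: no iterative optimization is needed once the stochastic and time integrals are accumulated from the response data.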
基金Supported by the National Natural Science Foundation of China (Grant Nos. 1093100211071035+2 种基金7090101671171035)Excellent Talents Program of Liaoning Educational Committee (Grant No. 2008RC15)
Abstract: In this paper, a semiparametric two-sample density ratio model is considered and the empirical likelihood method is applied to estimate its parameters. A commonly occurring problem in computation is that the empirical likelihood function may be a concave-convex function. Here a simple Lagrange saddle point algorithm is presented for computing the saddle point of the empirical likelihood function when the Lagrange multiplier has no explicit solution, yielding the maximum empirical likelihood estimate (MELE) of the parameters. Monte Carlo simulations illustrate the Lagrange saddle point algorithm.
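The Lagrange-multiplier difficulty described above already appears in the simplest empirical likelihood problem, profiling a mean. A minimal sketch (one-dimensional mean constraint solved by Newton iteration, not the paper's two-sample density ratio model; the data and hypothesized mean are illustrative):

```python
import random

random.seed(4)
xs = [random.gauss(1.0, 1.0) for _ in range(200)]
mu = 1.2   # hypothesized mean at which the EL is profiled

# Profile empirical likelihood: weights w_i = 1/(n*(1 + lam*(x_i - mu)))
# where the Lagrange multiplier lam solves (no closed form)
#   g(lam) = sum_i (x_i - mu)/(1 + lam*(x_i - mu)) = 0
z = [x - mu for x in xs]
lam = 0.0
for _ in range(50):                      # Newton iterations on g(lam) = 0
    g = sum(zi/(1.0 + lam*zi) for zi in z)
    dg = -sum(zi*zi/(1.0 + lam*zi)**2 for zi in z)
    step = g/dg
    lam -= step
    if abs(step) < 1e-12:
        break

n = len(xs)
w = [1.0/(n*(1.0 + lam*zi)) for zi in z]
```

At the solution the weights automatically sum to 1 and reproduce the constrained mean; nesting this inner solve inside an outer optimization over the model parameters is where the saddle point structure of the MELE computation arises.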
基金The NSF (11271155,11001105,11071126,10926156 and 11071269) of Chinathe Specialized Research Fund (20110061110003 and 20090061120037) for the Doctoral Program of Higher Education+1 种基金the General Humanities and Social Science Research Projects (11YJAZH125) Sponsored by Ministry of Educationthe Science and Technology Development Program (201201082) of Jilin Province
Abstract: In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Pareto distribution is studied. Different methods of estimation are discussed, including the mid-point approximation estimator, the maximum likelihood estimator, and the moment estimator. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.
Funding: this project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. RG-14-130-41. The author therefore acknowledges with thanks DSR for technical and financial support.
Abstract: In this paper, a modified form of the traditional inverse Lomax distribution is proposed and its characteristics are studied. The new distribution, called the modified logarithmic transformed inverse Lomax distribution, is generated by adding a new shape parameter based on the logarithmic transformed method. It contains two shape parameters and one scale parameter, and exhibits various shapes of probability density and hazard rate functions. The new shape parameter increases the flexibility of the statistical properties of the traditional inverse Lomax distribution, including the mean, variance, skewness, and kurtosis. The moments, entropies, order statistics, and other properties are discussed. Six methods of estimation are considered for the distribution parameters, and a simulation study is performed to compare the performance of the different estimators. To show the flexibility and applicability of the proposed distribution, two real data sets from the engineering and medical fields are analyzed. The simulation results and real data analysis show that the Anderson-Darling estimates have the smallest mean square errors among all the estimates, and that the traditional inverse Lomax distribution and some of its generalizations fall short in modeling the engineering and medical data. The proposed distribution overcomes this shortcoming and provides a good fit, which makes it a suitable choice for modeling such data sets.
Abstract: In this paper, inference on parameter estimation of the generalized Rayleigh distribution is investigated for progressively type-I interval censored samples. Estimators of the distribution parameters via maximum likelihood, the method of moments, and the probability plot are derived, and their performance is compared based on simulation results in terms of mean squared error and bias. A case application to plasma cell myeloma data illustrates the proposed estimation methods.
基金The MKE(The Ministry of Knowledge Economy),Korea,under the ITRC(Information Technology Research Center)support program supervised by the NIPA(National IT Industry Promotion Agency) (NIPA-2012-H0301-12-2006)Brain Research Center(BRC)(2012K001127),The MKE(10033634-2012-21)National Research Foundation of Korea(NRF)(2012-0005787)
Abstract: We characterize the hemodynamic response changes, measured by near-infrared spectroscopy (NIRS), in the main olfactory bulb (MOB) of anesthetized rats during the presentation of three different odorants: (i) plain air as a reference (Blank), (ii) 2-heptanone (HEP), and (iii) isopropylbenzene (Ib). The odorants generate different changes in the concentration of oxy-hemoglobin. Our results suggest that NIRS technology might be useful for discriminating various odorants in a non-invasive manner in animals with a superb olfactory system.