[Objective] The study aimed to investigate the effect of tobacco brown spot on the yield and output value of tobacco leaf. [Method] Upper, middle and lower leaves were selected from tobacco plants during the harvest period to estimate the loss rates of yield and output value of tobacco leaf caused by different disease levels of brown spot. Regression correlation analysis was also conducted. [Result] The disease level of brown spot had an extremely significant strong negative correlation with the single-leaf weight of tobacco leaf, and an extremely significant strong positive correlation with the loss rate of single-leaf weight. The loss rate of single-leaf weight increased markedly faster for middle and upper leaves than for lower leaves. The loss rates of single-leaf weight of upper, middle and lower leaves were 3.18%-28.95%, 3.43%-28.88% and 10.07%-26.90%, respectively. The higher the disease level of brown spot, the lower the yield and output value of tobacco leaf, and the higher the corresponding loss rate. Correlation analysis showed that the disease level of brown spot had an extremely significant strong negative correlation with the yield and output value of tobacco leaf, and an extremely significant strong positive correlation with the loss rates of yield and output value. The negative impact of brown spot on the output value of tobacco leaf was far greater than that on the yield: the highest loss rate of yield was 28.56%, while the highest loss rate of output value reached 89.67%. [Conclusion] The study provides a theoretical basis for accurately determining the critical period for the control of brown spot, thus reducing the damage to tobacco leaf and improving its output value.
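The regression-correlation step described above can be sketched generically: fit loss rate against disease level and report Pearson's correlation coefficient. The disease levels and loss rates below are invented for illustration and are not the study's data.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# hypothetical disease levels vs. single-leaf weight loss rate (%)
levels = [0, 1, 3, 5, 7, 9]
loss_rate = [0.0, 3.4, 9.8, 16.5, 22.1, 28.9]
r = pearson_r(levels, loss_rate)
```

A value of r close to 1 corresponds to the "extremely significant strong positive correlation" reported between disease level and loss rate.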
Because a raised-cosine shaping filter is often employed in practical satellite communication systems, the envelope fluctuation at symbol transition points is reduced, which causes the common wavelet-based algorithm to fail at low SNR. Accordingly, a method of blind symbol rate estimation using signal preprocessing and the Haar wavelet is proposed in this paper. Firstly, the effect of the shaping filter is reduced by signal preprocessing. Then, the optimal scale factor is searched for, and the signal is processed and analyzed by the Haar wavelet transform. Finally, the symbol-rate spectral line is extracted, and a nonlinear filtering method is introduced to improve the estimation performance. Theoretical analysis and computer simulation show the efficiency of the proposed algorithm at low SNR and small roll-off factors.
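As a minimal illustration of the core idea above (not the authors' full algorithm, which adds preprocessing and nonlinear filtering), the sketch below applies a Haar-style detail transform to a rectangular-pulse baseband signal: symbol transitions produce pulses, and the spectral line of the pulse train sits at the symbol rate. All signal parameters are invented for the example.

```python
import numpy as np

def haar_detail(x, scale):
    # Haar wavelet coefficient at each sample: difference of adjacent window means,
    # which peaks at symbol transitions
    w = np.zeros(len(x))
    for t in range(scale, len(x) - scale):
        w[t] = abs(x[t:t + scale].mean() - x[t - scale:t].mean())
    return w

def estimate_symbol_rate(x, fs, scale):
    w = haar_detail(x, scale)
    spec = np.abs(np.fft.rfft(w - w.mean()))
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    return freqs[1:][np.argmax(spec[1:])]   # skip the DC bin

# toy baseband: random +/-1 symbols, 20 samples per symbol, fs = 1000 Hz,
# so the true symbol rate is 50 Baud
rng = np.random.default_rng(0)
sps, fs = 20, 1000.0
symbols = rng.choice([-1.0, 1.0], size=200)
x = np.repeat(symbols, sps)
rate = estimate_symbol_rate(x, fs, scale=sps // 2)
```

With shaped (raised-cosine) pulses the transition pulses weaken, which is exactly the failure mode the paper's preprocessing step addresses.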
Accurate insight into the heat generation rate (HGR) of lithium-ion batteries (LIBs) is one of the key issues for battery management systems seeking to formulate thermal safety warning strategies in advance. For this reason, this paper proposes a novel physics-informed neural network (PINN) approach for HGR estimation of LIBs under various driving conditions. Specifically, a single particle model with thermodynamics (SPMT) is first constructed to extract the critical physical knowledge related to battery HGR. Subsequently, the surface concentrations of the positive and negative electrodes in the SPMT model are fed into bidirectional long short-term memory (BiLSTM) networks as physical information and, combined with other feature variables, constitute a novel PINN approach that achieves HGR estimation of LIBs with higher accuracy. Additionally, some critical hyperparameters of the BiLSTM used in the PINN approach are determined through a Bayesian optimization algorithm (BOA), and the results of the BOA-based BiLSTM are compared with those of traditional BiLSTM/LSTM networks. Eventually, using HGR data generated from a validated virtual battery, it is shown that the proposed approach predicts battery HGR well under the dynamic stress test (DST) and the worldwide harmonized light vehicles test procedure (WLTP): the mean absolute error under DST is 0.542 kW/m^3 and the root mean square error under WLTP is 1.428 kW/m^3 at 25 °C. Lastly, the results of this paper also offer a new perspective on the application of PINN approaches to battery HGR estimation.
It has been shown that remote monitoring of pulmonary activity can be achieved using ultra-wideband (UWB) systems, which show promise in home healthcare, rescue, and security applications. In this paper, we first present a multi-ray propagation model for a UWB signal traveling through the human thorax and reflected at the air/dry-skin/fat/muscle interfaces. A geometry-based statistical channel model is then developed for simulating the reception of UWB signals in an indoor propagation environment. This model enables replication of the time-varying multipath profiles caused by the displacement of the human chest. Subsequently, a UWB distributed cognitive radar system (UWB-DCRS) is developed for the robust detection of chest cavity motion and the accurate estimation of respiration rate. The analytical framework can serve as a basis for the planning and evaluation of future measurement programs. We also provide a case study on how the antenna beamwidth affects the estimation of respiration rate based on the proposed propagation models and system architecture.
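The final step of such a system can be sketched compactly: estimate the respiration rate from a chest-displacement waveform by locating the dominant spectral peak. The synthetic waveform, sampling rate, and noise level below are assumptions of the example, not values from the paper.

```python
import numpy as np

def respiration_rate_bpm(displacement, fs):
    # dominant non-DC spectral peak of the chest-displacement signal, in breaths/min
    spec = np.abs(np.fft.rfft(displacement - displacement.mean()))
    freqs = np.fft.rfftfreq(displacement.size, d=1.0 / fs)
    return 60.0 * freqs[1:][np.argmax(spec[1:])]

# synthetic 60 s recording: 0.25 Hz breathing (15 breaths/min) plus measurement noise
fs = 10.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(6)
chest = 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)
rate = respiration_rate_bpm(chest, fs)
```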
Estimation of the density and hazard rate is very important in the reliability analysis of a system. In order to estimate the density and hazard rate of a system whose hazard rate is monotonically decreasing, a new nonparametric estimator is put forward. The estimator is based on the kernel function method and an optimization algorithm. Numerical experiments show that the method is sufficiently accurate and can be used in many cases.
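The abstract does not give the estimator's exact form; a generic kernel-based hazard estimate (Gaussian kernel density divided by the empirical survival function) can be sketched as follows, tested here on exponential data whose true hazard is constant.

```python
import numpy as np

def kernel_hazard(sample, x, bandwidth):
    # hazard = density / survival: Gaussian KDE over the empirical survival function
    u = (x - sample) / bandwidth
    f_hat = np.exp(-0.5 * u ** 2).sum() / (len(sample) * bandwidth * np.sqrt(2.0 * np.pi))
    s_hat = (sample > x).mean()
    return f_hat / max(s_hat, 1e-12)

rng = np.random.default_rng(1)
data = rng.exponential(scale=1.0, size=5000)    # true hazard is constant: 1.0
bw = 1.06 * data.std() * len(data) ** (-0.2)    # Silverman's rule of thumb
hz = kernel_hazard(data, 1.0, bw)
```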
The surface air temperature lapse rate (SATLR) plays a key role in hydrological, glacial and ecological modeling, in regional downscaling, and in the reconstruction of high-resolution surface air temperature. However, accurately estimating the SATLR in regions with complex terrain and climatic conditions has been a great challenge for researchers. The geographically weighted regression (GWR) model was applied in this paper to estimate the SATLR over China's mainland, and the GWR model was then assessed and validated. The spatial pattern of regression residuals, identified by Moran's index, indicated that the GWR model was broadly reasonable for estimating the SATLR. The small mean absolute error (MAE) in all months indicated that the GWR model had strong predictive ability for surface air temperature. Comparison with previous studies of the seasonal mean SATLR further confirmed the accuracy of the estimation. Therefore, the GWR method has potential application for estimating the SATLR in large regions with complex terrain and climatic conditions.
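A bare-bones geographically weighted regression: each target site gets its own weighted least-squares fit, with Gaussian distance-decay weights centred on that site. The synthetic station data below (a lapse rate that drifts from west to east) is an assumption of the sketch, not the paper's dataset.

```python
import numpy as np

def gwr_fit(coords, X, y, site, bandwidth):
    # weighted least squares at one site with Gaussian distance-decay weights
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(2)
n = 500
coords = rng.uniform(0.0, 1.0, size=(n, 2))            # station locations
elev = rng.uniform(0.0, 3.0, size=n)                   # elevation, km
lapse = -6.0 + 2.0 * coords[:, 0]                      # lapse rate drifts west -> east
temp = 20.0 + lapse * elev + rng.normal(0.0, 0.1, n)   # station temperatures, deg C
X = np.column_stack([np.ones(n), elev])
coef = gwr_fit(coords, X, temp, site=np.array([0.5, 0.5]), bandwidth=0.15)
```

At the central site the fitted slope recovers the local lapse rate (about -5 °C/km here) rather than a single global average, which is the point of GWR.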
In this paper, we construct the E·B (empirical Bayes) estimation for the parameter function of a one-sided truncated distribution under NA (negatively associated) samples. We also obtain its convergence rate O(n^(-q)), where q approaches 1/2.
In this paper, we define the Weibull kernel and use it for nonparametric estimation of the probability density function (pdf) and the hazard rate function for independent and identically distributed (iid) data. The bias, variance and optimal bandwidth of the proposed estimator are investigated, as is its asymptotic normality. The performance of the proposed estimator is tested using a simulation study and real data.
The goal of quantum key distribution (QKD) is to generate a secret key shared between two distant players, Alice and Bob. We present the connection between the sampling rate and the erroneous-judgment probability when estimating the error rate with a random sampling method, and we propose a method to compute the optimal sampling rate, which maximizes the final secure key generation rate. These results can be applied to choose the optimal sampling rate and improve the performance of QKD systems with finite resources.
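The qualitative connection can be illustrated with a Hoeffding-style concentration bound (a generic inequality, not necessarily the paper's exact analysis): for a sample of k bits, the probability that the sampled error rate misjudges the true rate by more than δ decays exponentially in k, which yields a minimal sampling rate for a target failure probability ε.

```python
import math

def erroneous_judgment_prob(k, delta):
    # Hoeffding bound on P(|sampled error rate - true error rate| > delta)
    return 2.0 * math.exp(-2.0 * k * delta ** 2)

def min_sampling_rate(n_total, delta, eps):
    # smallest sample size k whose bound is <= eps, returned as (rate k/n, k)
    k = math.ceil(math.log(2.0 / eps) / (2.0 * delta ** 2))
    return min(k / n_total, 1.0), k

# e.g. 10^6 sifted bits, tolerance 1%, failure probability 10^-10
rate, k = min_sampling_rate(n_total=10 ** 6, delta=0.01, eps=1e-10)
```

Sampling more bits tightens the error-rate estimate but leaves fewer bits for the key, which is exactly the trade-off behind an optimal sampling rate.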
The application of the Monte Carlo method to estimating rate constants for polymerization is described. A general program for Monte Carlo simulation was first determined according to the elementary reactions, after which the rate constants could be automatically adjusted and optimized by comparing experimental and simulated data with an error expression that met a given minimum criterion. Such a process allows the rate constants to be estimated without a kinetic model in advance. The technique was applied to estimate the rate constants of the bulk polymerization of styrene catalyzed by a rare earth catalyst. The estimated results showed that the Monte Carlo method is feasible and effective for estimating rate constants in polymerization engineering.
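A toy version of the idea: simulate the reaction stochastically (here a single first-order step via the Gillespie algorithm, far simpler than a full polymerization mechanism) and pick the rate constant whose simulated trajectory best matches the "experimental" data. All numbers are illustrative.

```python
import math, random

def gillespie_decay(k, n0, t_end, rng):
    # Gillespie stochastic simulation of first-order decay A -> B with rate k
    t, n, traj = 0.0, n0, []
    while n > 0 and t < t_end:
        t += rng.expovariate(k * n)   # waiting time to the next reaction event
        n -= 1
        traj.append((t, n))
    return traj

def count_at(traj, n0, t):
    # number of A molecules remaining at time t along one trajectory
    n = n0
    for ti, ni in traj:
        if ti > t:
            break
        n = ni
    return n

rng = random.Random(3)
k_true, n0, times = 0.5, 2000, [1.0, 2.0, 3.0, 4.0]
# synthetic "experimental" data: the exact mean decay curve
observed = [n0 * math.exp(-k_true * t) for t in times]
best_k, best_err = None, float("inf")
for k in [0.3, 0.4, 0.5, 0.6, 0.7]:
    traj = gillespie_decay(k, n0, times[-1], rng)
    err = sum((count_at(traj, n0, t) - o) ** 2 for t, o in zip(times, observed))
    if err < best_err:
        best_k, best_err = k, err
```

The grid search stands in for the paper's automatic adjustment loop: no closed-form kinetic model is needed, only a simulator and an error criterion.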
The optimality of density estimation on the Besov spaces B^s_{r,q}(R) for the L^p risk was established by Donoho, Johnstone, Kerkyacharian and Picard ("Density estimation by wavelet thresholding," The Annals of Statistics, Vol. 24, No. 2, 1996, pp. 508-539). To show the lower bound of the optimal rates of convergence R_n(B^s_{r,q}, p), they used the Korostelev and Assouad lemmas, whose conditions are difficult to verify. This paper gives another proof of that bound using Fano's lemma, which is somewhat simpler. In addition, our method can be used in many other statistical models to derive lower bounds for estimation.
In a survival analysis context, we suggest a new method to estimate the piecewise constant hazard rate model. The method provides an automatic procedure to find the number and location of the cut points and to estimate the hazard on each cut interval. Estimation is performed through a penalized likelihood using an adaptive ridge procedure. A bootstrap procedure is proposed in order to derive valid statistical inference, taking into account both the variability of the estimate and the variability in the choice of the cut points. The new method is applied both to simulated data and to the Mayo Clinic trial on primary biliary cirrhosis. The algorithm implementation is seen to work well and to be of practical relevance.
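With the cut points fixed, the maximum-likelihood hazard on each interval is simply events divided by person-time at risk; the adaptive-ridge selection of the cut points themselves is beyond this sketch. Tested on exponential data, whose hazard is constant:

```python
import random

def piecewise_hazard(times, cuts):
    # MLE for a piecewise constant hazard with known cut points (no censoring):
    # events in each interval divided by the total time at risk spent in it
    edges = [0.0] + list(cuts) + [float("inf")]
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        events = sum(1 for t in times if lo < t <= hi)
        at_risk = sum(min(t, hi) - lo for t in times if t > lo)
        rates.append(events / at_risk if at_risk > 0 else 0.0)
    return rates

rng = random.Random(4)
data = [rng.expovariate(2.0) for _ in range(20000)]   # true hazard = 2.0 everywhere
rates = piecewise_hazard(data, cuts=[0.5, 1.0])
```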
AIM To evaluate the influence of creatinine methodology on the performance of the Chronic Kidney Disease Epidemiology Collaboration estimated glomerular filtration rate (CKD-EPI-eGFR) for CKD diagnosis/staging in a large cohort of diabetic patients. METHODS Fasting blood samples were taken from diabetic patients attending our clinic for their regular annual examination, including laboratory measurement of serum creatinine and eGFR. RESULTS Our results indicated an overall excellent agreement in CKD staging (kappa = 0.918) between the Jaffé serum creatinine- and enzymatic serum creatinine-based CKD-EPI-eGFR, with 9% of discordant cases. Compared to enzymatic creatinine, the majority of discordances (8%) were positive, i.e., associated with re-classification to a more advanced CKD stage, whereas only 1% of cases were negatively discordant when Jaffé creatinine was used for eGFR calculation. A minor proportion of the discordant cases (3.5%) were re-classified into a clinically relevant CKD stage indicating mildly to moderately decreased kidney function (< 60 mL/min per 1.73 m^2). Significant acute and chronic hyperglycaemia, assessed as plasma glucose and HbA1c levels far above the recommended glycaemic goals, was associated with positively discordant cases. Due to their very low frequency, positive discordances are not likely to present a great burden for health-care providers, while intensified medical care may actually benefit the small number of discordant patients. On the other hand, the very low proportion of negatively discordant cases (1%) at the 60 mL/min per 1.73 m^2 eGFR level indicates a negligible possibility of missing a CKD diagnosis, which would be the most prominent clinical problem affecting patient care, considering the high risk of adverse outcomes associated with CKD. CONCLUSION This study indicates that the compensated Jaffé creatinine procedure, in spite of its glucose-dependent bias, is not inferior to enzymatic creatinine for CKD diagnosis/staging and therefore may provide a reliable and cost-effective tool for renal function assessment in diabetic patients.
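For reference, the 2009 CKD-EPI creatinine equation used in such eGFR calculations can be coded directly. The race coefficient of the original 2009 formula is omitted in this sketch, and the worked patient values are illustrative, not from the study.

```python
def ckd_epi_egfr_2009(scr_mg_dl, age_years, female):
    # 2009 CKD-EPI creatinine equation (race coefficient omitted in this sketch)
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

egfr_male = ckd_epi_egfr_2009(0.9, 50, female=False)    # ~99 mL/min per 1.73 m^2
egfr_female = ckd_epi_egfr_2009(0.7, 50, female=True)   # ~101 mL/min per 1.73 m^2
```

Because serum creatinine enters the equation nonlinearly, a method-dependent creatinine bias (Jaffé vs. enzymatic) propagates directly into the eGFR and hence into stage assignment.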
The dependency of the steady-state yaw rate model on vehicle weight and its distribution is studied in this paper. A speed-dependent adjustment of the yaw rate model is proposed to reduce the yaw rate estimation error. This new methodology allows the calibration engineer to minimize the yaw rate estimation error caused by different weight conditions without going through the calibration process multiple times. It is expected that this modified yaw rate model will improve the performance of Electronic Stability Control (ESC) systems in terms of response time and robustness.
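A linear single-track ("bicycle") model makes the weight dependence explicit: the stability factor K, and hence the steady-state yaw rate, contains the vehicle mass and the fore/aft position of the centre of gravity. The parameter values below are generic illustrations, not calibration data from the paper.

```python
def steady_state_yaw_rate(v, delta, m, L, a, b, c_f, c_r):
    # v: speed [m/s], delta: road-wheel steer angle [rad], m: mass [kg]
    # L = a + b: wheelbase [m], a/b: CG-to-front/rear-axle distance [m]
    # c_f, c_r: per-axle cornering stiffness [N/rad]
    K = (m / L ** 2) * (b / c_f - a / c_r)   # K > 0 understeer, K < 0 oversteer
    return v * delta / (L * (1.0 + K * v ** 2))

# neutral-steer reference vs. a softer-front variant at 20 m/s
r_neutral = steady_state_yaw_rate(20.0, 0.05, 1500.0, 2.8, 1.4, 1.4, 80000.0, 80000.0)
r_under = steady_state_yaw_rate(20.0, 0.05, 1500.0, 2.8, 1.4, 1.4, 60000.0, 80000.0)
```

Loading the vehicle changes both m and the a/b split, shifting K and therefore the v²-dependent term, which is why a speed-dependent adjustment is a natural way to absorb weight variation.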
Achievable rate (AR) is significant to communications. For multi-input multi-output (MIMO) digital transmissions with finite-alphabet inputs, which greatly improve the performance of communications, it is rather difficult to calculate the AR accurately. Here we propose an estimation of considerable accuracy and low complexity, based on a Euclidean measure matrix for given channel states and constellations. The main contributions are an explicit expression, freedom from constraints on MIMO schemes, channel states, and constellations, and a controllable estimation gap. Numerical results show that the proposition achieves sufficiently accurate AR computation. In addition, the estimation gap given by theoretical deduction agrees well with the numerical results.
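For intuition, the achievable rate of a finite-alphabet input is the mutual information I(X;Y), which is straightforward to estimate by Monte Carlo in the scalar BPSK/AWGN case (a much simpler setting than the MIMO one treated above; the sample size and SNR values are arbitrary):

```python
import math, random

def bpsk_mutual_info(snr_db, n_samples=20000, seed=5):
    # Monte Carlo estimate of I(X;Y) for equiprobable BPSK over real AWGN:
    # I = 1 - E_{y|x=+1}[ log2(1 + exp(-2y/sigma^2)) ]  (symmetric in x)
    rng = random.Random(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)   # noise variance for unit symbol energy
    sigma = math.sqrt(sigma2)
    acc = 0.0
    for _ in range(n_samples):
        y = 1.0 + rng.gauss(0.0, sigma)
        acc += math.log2(1.0 + math.exp(-2.0 * y / sigma2))
    return 1.0 - acc / n_samples

mi_high = bpsk_mutual_info(10.0)    # high SNR: approaches 1 bit/symbol
mi_low = bpsk_mutual_info(-10.0)    # low SNR: approaches 0
```

The MIMO case replaces the scalar likelihood ratio with sums over all transmit vectors, which is the combinatorial cost the paper's Euclidean-measure estimate avoids.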
The Nelson-Siegel model (NS model) and two extended NS models were compared using daily interbank government bond data. Based on a grouping of bonds according to residual term to maturity, the empirical research proceeded with in-sample and out-of-sample tests. The results show that the three models are almost equivalent in estimating the interbank term structure of interest rates. For terms to maturity between 0 and 7 years, the gap between the in-sample and out-of-sample absolute errors of the three models is smaller than 0.2 Yuan, and the absolute values of the in-sample and out-of-sample errors are smaller than 0.1 Yuan, so the estimation is credible. For terms to maturity between 7 and 20 years, the gap between the in-sample and out-of-sample absolute errors of the three models is larger than 0.4 Yuan, and the absolute values of the in-sample and out-of-sample errors are larger than 1.0 Yuan, so the estimation is not credible.
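The NS spot-rate curve itself has the closed form y(τ) = β₀ + β₁·(1−e^(−τ/λ))/(τ/λ) + β₂·[(1−e^(−τ/λ))/(τ/λ) − e^(−τ/λ)]. A direct implementation (with made-up parameter values, not fitted ones) shows the level/slope interpretation: the curve tends to β₀ at the long end and to β₀+β₁ at the short end.

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    # NS spot rate: level (beta0), slope (beta1) and curvature (beta2) loadings
    x = tau / lam
    slope = (1.0 - math.exp(-x)) / x
    curvature = slope - math.exp(-x)
    return beta0 + beta1 * slope + beta2 * curvature

# made-up parameters: long-end level 4%, short rate 4% - 2% = 2%
y_short = nelson_siegel(1e-6, 4.0, -2.0, 1.0, 2.0)
y_long = nelson_siegel(1000.0, 4.0, -2.0, 1.0, 2.0)
```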
A new compensation method for the angular rate estimation of a non-gyro inertial measurement unit (NGIMU) is proposed to address accelerometer mounting error, which seriously affects the precision of navigation parameter estimation. Using the accelerometer output error function, the algorithm compensates the attitude parameters in the traditional angular rate estimation algorithm to reduce the effect of accelerometer mounting error. Building on traditional accelerometer configurations, a novel nine-accelerometer configuration of the NGIMU is presented and its mathematical model constructed. Semi-hardware simulations of the proposed algorithm based on the presented NGIMU configuration show the effectiveness of the new algorithm.
Aging is an inevitable process that is usually measured by chronological age, with people aged 65 and over defined as "older individuals". There is disagreement in the current scientific literature regarding the best methods for estimating glomerular filtration rate (eGFR) in older adults. Several studies suggest the use of an age-adjusted definition to improve accuracy and avoid overdiagnosis. In contrast, some researchers argue that such changes could complicate the classification of chronic kidney disease (CKD). Several formulas, including the Modification of Diet in Renal Disease, CKD-Epidemiology Collaboration, and Cockcroft-Gault equations, are used to estimate eGFR. However, each of these formulas has significant limitations when applied to older adults, primarily due to sarcopenia and malnutrition, which greatly affect both muscle mass and creatinine levels. Alternative formulas, such as the Berlin Initiative Study and the Full Age Spectrum equations, provide more accurate estimates for older adults by accounting for age-related physiological changes. In frail older adults, the use of cystatin C leads to better eGFR calculations for assessing renal function. Accurate eGFR measurements improve the health of older patients by enabling better medication dosing. A thorough approach that includes multiple calibrated diagnostic methods and a detailed geriatric assessment is necessary for the effective management of kidney disease and other age-related conditions in older adults.
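Of the formulas mentioned, Cockcroft-Gault is the simplest to state; coding it makes the age and body-composition (weight, creatinine) dependence concrete. The patient values are invented for illustration.

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female):
    # Cockcroft-Gault creatinine clearance estimate (mL/min)
    crcl = (140.0 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# same serum creatinine and weight: the formula markedly lowers the estimate with age
crcl_40 = cockcroft_gault(40, 70.0, 1.0, female=False)   # ~97 mL/min
crcl_80 = cockcroft_gault(80, 70.0, 1.0, female=False)   # ~58 mL/min
```

The weight term is exactly why sarcopenia distorts creatinine-based estimates in frail older adults: low muscle mass lowers creatinine without a corresponding gain in kidney function.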
Digital pathology is a major revolution in pathology and is changing the clinical routine for pathologists. We work on providing a computer-aided diagnosis (CAD) system that automatically and robustly provides the pathologist with a second opinion for many diagnosis tasks. However, inter-observer variability prevents thorough validation of any proposed technique for a specific problem. In this work, we study the variability and reliability of proliferation rate estimation (PRE) from digital pathology images of breast cancer. We also study the robustness of our recently proposed CAD system for PRE. Three statistical significance tests showed that our automated CAD system was as reliable as the expert pathologist in both brown and blue nuclei estimation on a dataset of 100 images.
This paper presents the derivation of a priori error estimates and convergence rates of finite element processes for boundary value problems (BVPs) described by self-adjoint, non-self-adjoint, and nonlinear differential operators. A posteriori error estimates are discussed in the context of local approximations in higher-order scalar product spaces. An a posteriori error computational framework (without knowledge of the theoretical solution) is presented for all BVPs regardless of the method of approximation employed in constructing the integral form. This enables computation of local errors as well as global errors in the computed finite element solutions. The two most significant and essential aspects of the research presented in this paper that enable all of the features described above are: 1) ensuring variational consistency of the integral form(s) resulting from the methods of approximation for self-adjoint, non-self-adjoint, and nonlinear differential operators, and 2) choosing local approximations for the elements of a discretization in a subspace of a higher-order scalar product space that is minimally conforming, hence ensuring the desired global differentiability of the approximations over the discretizations. It is shown that when the theoretical solution of a BVP is analytic, the a priori error estimate (in the asymptotic range, discussed in a later section of the paper) is independent of the method of approximation and the nature of the differential operator, provided the resulting integral form is variationally consistent (VC). Thus, finite element processes utilizing integral forms based on different methods of approximation, but resulting in VC integral forms, yield the same a priori error estimate and convergence rate.
It is shown that a variationally consistent (VC) integral form has the best-approximation property in some norm and, conversely, that an integral form with the best-approximation property in some norm is variationally consistent. That is, the best-approximation property and the VC of the integral form are equivalent; one cannot exist without the other, hence they can be used interchangeably. One-dimensional model problems consisting of the diffusion equation, the convection-diffusion equation, and Burgers equation, described by self-adjoint, non-self-adjoint, and nonlinear differential operators, are considered to present extensive numerical studies using the Galerkin method with weak form (GM/WF) and the least squares process (LSP) to determine computed convergence rates of various error norms and to present comparisons with theoretical convergence rates.
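In such numerical studies, the computed convergence rate is typically extracted from the errors on two successive mesh refinements via rate = log(e₁/e₂)/log(h₁/h₂). The error values below are illustrative, not taken from the paper's tables.

```python
import math

def observed_rate(h1, e1, h2, e2):
    # convergence rate from errors on two successively refined meshes
    return math.log(e1 / e2) / math.log(h1 / h2)

# errors behaving like C*h^2, as for a second-order accurate discretization
rate = observed_rate(0.1, 4.0e-3, 0.05, 1.0e-3)
```

Comparing this observed rate with the a priori prediction is the standard check that a computation has reached the asymptotic range.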
Funding: Supported by the State Tobacco Monopoly Administration project "National Survey of Pests in Tobacco" (110200902065) and the Yunnan Tobacco Monopoly Bureau technology project "Investigation of Tobacco Pests in Yunnan Province" (2010YN19).
Funding: Supported by the National Natural Science Foundation of China (Nos. 60972061, 60972062, and 61032004), the National High Technology Research and Development Program of China ("863" Program, No. 2008AA12A204), and the Natural Science Foundation of Jiangsu Province (No. BK2009060).
Funding: Supported by the Artificial Intelligence Technology Project of the Xi'an Science and Technology Bureau, China (No. 21RGZN0014).
Funding: Supported by the National Key R&D Program (No. 2018YFA0605603) and the National Natural Science Foundation of China (No. 41575003).
Fund: Supported by the National Natural Science Foundation of China under Grant Nos. U1304613 and 11204379.
Abstract: The goal of quantum key distribution (QKD) is to generate a secret key shared between two distant players, Alice and Bob. We present the connection between the sampling rate and the erroneous-judgment probability when estimating the error rate with the random sampling method, and propose a method to compute the optimal sampling rate, which maximizes the final secure key generation rate. These results can be applied to choose the optimal sampling rate and improve the performance of QKD systems with finite resources.
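The trade-off between sampling rate and erroneous-judgment probability can be illustrated with a textbook Hoeffding bound (a simpler, looser bound than the paper's exact analysis; the function name is ours):

```python
import math

def min_sample_size(deviation, eps):
    """Smallest sample size k such that, by Hoeffding's inequality
    P(|observed rate - true rate| > deviation) <= 2 * exp(-2 * k * deviation^2),
    the erroneous-judgment probability is at most eps."""
    return math.ceil(math.log(2.0 / eps) / (2.0 * deviation ** 2))

k = min_sample_size(deviation=0.01, eps=1e-6)
```

Larger samples tighten the error-rate estimate but consume more of the raw key, which is exactly the tension the optimal sampling rate resolves.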
Fund: Project supported by the National Natural Science Foundation of China (Grant No. 20406016).
Abstract: The application of the Monte Carlo method to estimating rate constants for polymerization is described. A general program for Monte Carlo simulation was first determined according to the elementary reactions, after which the rate constants could be automatically adjusted and optimized by comparing experimental and simulated data with an error expression required to meet a given minimum criterion. This process allows the rate constants to be estimated without a kinetic model in advance. The technique was applied to estimate the rate constants of the bulk polymerization of styrene catalyzed by a rare earth catalyst. The estimated results showed that the Monte Carlo method is feasible and effective for estimating rate constants in polymerization engineering.
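The adjust-and-compare loop can be sketched with a toy first-order reaction A -> B: simulate stochastically for each candidate rate constant and keep the one that best matches the "experimental" value (a grid search stands in for the paper's optimization; all values are illustrative):

```python
import numpy as np

def gillespie_first_order(k, n0, t_end, rng):
    """Stochastic (kinetic Monte Carlo) simulation of A -> B with rate
    constant k; returns the number of A molecules left at time t_end."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.exponential(1.0 / (k * n))   # waiting time to next event
        if t > t_end:
            break
        n -= 1
    return n

def estimate_k(observed, n0, t_end, k_grid, reps=100, seed=1):
    """Pick the k whose simulated mean count best matches the observed
    value -- a grid-search stand-in for automatic rate-constant adjustment."""
    rng = np.random.default_rng(seed)
    errs = [abs(np.mean([gillespie_first_order(k, n0, t_end, rng)
                         for _ in range(reps)]) - observed)
            for k in k_grid]
    return k_grid[int(np.argmin(errs))]

# Pseudo-"experimental" datum from true k = 0.5: E[A(t)] = n0 * exp(-k * t)
k_hat = estimate_k(observed=200 * np.exp(-0.5), n0=200, t_end=1.0,
                   k_grid=np.linspace(0.1, 1.0, 10))
```

No closed-form kinetic model is fitted at any point; only simulated and observed data are compared, which is the essence of the approach.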
Abstract: The optimality of density estimation on the Besov spaces B^s_{r,q}(R) for the L^p risk was established by Donoho, Johnstone, Kerkyacharian and Picard ("Density estimation by wavelet thresholding," The Annals of Statistics, Vol. 24, No. 2, 1996, pp. 508-539). To show the lower bound of the optimal rates of convergence R_n(B^s_{r,q}, p), they used the Korostelev and Assouad lemmas. However, the conditions of those two lemmas are difficult to verify. This paper gives another proof of that bound using Fano's lemma, which is somewhat simpler. In addition, our method can be used in many other statistical models to obtain lower bounds of estimation.
Abstract: In a survival analysis context, we suggest a new method to estimate the piecewise constant hazard rate model. The method provides an automatic procedure to find the number and location of the cut points and to estimate the hazard on each cut interval. Estimation is performed through a penalized likelihood using an adaptive ridge procedure. A bootstrap procedure is proposed in order to derive valid statistical inference, taking into account both the variability of the estimate and the variability in the choice of the cut points. The new method is applied both to simulated data and to the Mayo Clinic trial on primary biliary cirrhosis. The algorithm implementation is seen to work well and to be of practical relevance.
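Once the cut points are fixed, the maximum likelihood estimate of a piecewise constant hazard is simply events divided by time at risk on each interval. A sketch of that building block (the paper's contribution, selecting the cuts via an adaptive ridge penalty, is omitted here):

```python
import numpy as np

def piecewise_hazard(times, events, cuts):
    """MLE of a piecewise constant hazard with *given* cut points:
    lambda_j = (# events in interval j) / (total time at risk in interval j)."""
    edges = np.concatenate(([0.0], np.asarray(cuts, dtype=float), [np.inf]))
    lam = []
    for a, b in zip(edges[:-1], edges[1:]):
        # time each subject spends at risk inside (a, b]
        exposure = np.clip(np.minimum(times, b) - a, 0.0, None).sum()
        d = np.sum((times > a) & (times <= b) & (events == 1))
        lam.append(d / exposure if exposure > 0 else 0.0)
    return np.array(lam)

rng = np.random.default_rng(0)
t = rng.exponential(2.0, 2000)              # constant true hazard 0.5
lam = piecewise_hazard(t, np.ones(t.size), cuts=[1.0, 3.0])
```

With exponential data the true hazard is constant, so all three interval estimates should sit near 0.5 regardless of where the cuts fall.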
Abstract: AIM: To evaluate the influence of creatinine methodology on the performance of the Chronic Kidney Disease Epidemiology Collaboration estimated glomerular filtration rate (CKD-EPI-eGFR) for CKD diagnosis/staging in a large cohort of diabetic patients. METHODS: Fasting blood samples were taken from diabetic patients attending our clinic for their regular annual examination, including laboratory measurement of serum creatinine and eGFR. RESULTS: Our results indicated an overall excellent agreement in CKD staging (kappa = 0.918) between the Jaffé serum creatinine- and enzymatic serum creatinine-based CKD-EPI-eGFR, with 9% of cases discordant. As compared to enzymatic creatinine, the majority of discordances (8%) were positive, i.e., associated with re-classification to a more advanced CKD stage, whereas only 1% of cases were negatively discordant when Jaffé creatinine was used for eGFR calculation. A minor proportion of the discordant cases (3.5%) were re-classified into a clinically relevant CKD stage indicating mildly to moderately decreased kidney function (< 60 mL/min per 1.73 m^2). Significant acute and chronic hyperglycaemia, assessed as plasma glucose and HbA1c levels far above the recommended glycaemic goals, was associated with positively discordant cases. Due to their very low frequency, positive discordances are not likely to present a great burden for health-care providers, while intensified medical care may actually be beneficial for the small number of discordant patients. On the other hand, the very low proportion of negatively discordant cases (1%) at the 60 mL/min per 1.73 m^2 eGFR level indicates a negligible possibility of missing the CKD diagnosis, which would be the most prominent clinical problem affecting patient care, considering the high risk of adverse patient outcomes associated with CKD.
CONCLUSION: This study indicates that the compensated Jaffé creatinine procedure, in spite of its glucose-dependent bias, is not inferior to enzymatic creatinine for CKD diagnosis/staging and therefore may provide a reliable and cost-effective tool for renal function assessment in diabetic patients.
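For orientation, the 2009 CKD-EPI creatinine equation that both creatinine methods feed into can be sketched as follows (coefficients as commonly published; stated here as an assumption and not taken from the paper):

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation, mL/min per 1.73 m^2:
    eGFR = 141 * min(Scr/kappa, 1)^alpha * max(Scr/kappa, 1)^-1.209
           * 0.993^age * 1.018 [if female] * 1.159 [if black]."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209 * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

Because eGFR is a decreasing function of serum creatinine, a positive Jaffé bias (e.g., from glucose interference) lowers the computed eGFR, which is consistent with the positive-discordance pattern reported above.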
Abstract: The dependency of the steady-state yaw rate model on vehicle weight and its distribution is studied in this paper. A speed-dependent adjustment of the yaw rate model is proposed to reduce the yaw rate estimation error. This new methodology allows the calibration engineer to minimize the yaw rate estimation error caused by different weight conditions without going through the calibration process multiple times. This modified yaw rate model is expected to improve the performance of Electronic Stability Control (ESC) systems in terms of response time and robustness.
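The weight dependency is visible in the classical bicycle-model steady-state yaw rate, a standard formula (not the paper's adjusted model; parameter values below are illustrative):

```python
def steady_state_yaw_rate(v, delta, m, lf, lr, cf, cr):
    """Bicycle-model steady-state yaw rate (rad/s):
    r = v * delta / (L + K_us * v^2), with wheelbase L = lf + lr and
    understeer gradient K_us = (m / L) * (lr / cf - lf / cr).
    Mass m and its distribution (lf, lr) enter directly through K_us."""
    L = lf + lr
    k_us = (m / L) * (lr / cf - lf / cr)
    return v * delta / (L + k_us * v ** 2)

# 20 m/s, 0.05 rad steer, 1500 kg, CG 1.2 m / 1.5 m from axles,
# 80 kN/rad cornering stiffness front and rear (illustrative numbers)
r = steady_state_yaw_rate(v=20.0, delta=0.05, m=1500.0,
                          lf=1.2, lr=1.5, cf=8.0e4, cr=8.0e4)
```

For an understeering setup (K_us > 0), increasing the mass lowers the yaw rate at a given speed and steer angle, which is exactly the estimation error the speed-dependent adjustment targets.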
Abstract: The achievable rate (AR) is significant in communications. For multi-input multi-output (MIMO) digital transmissions with finite-alphabet inputs, which greatly improve the performance of communications, it is rather difficult to calculate the AR accurately. Here we propose an estimation of considerable accuracy and low complexity, based on a Euclidean measure matrix for given channel states and constellations. The main contributions are an explicit expression, no constraints on MIMO schemes, channel states or constellations, and a controllable estimation gap. Numerical results show that the proposition achieves sufficiently accurate AR computation. In addition, the estimation gap given by theoretical deduction agrees well with the numerical results.
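For intuition, the finite-alphabet achievable rate is easy to estimate by Monte Carlo in the single-antenna, BPSK-over-real-AWGN special case (a sketch only; the paper's Euclidean-measure-matrix estimator targets the general MIMO case):

```python
import numpy as np

def bpsk_awgn_mi(snr_db, n=200_000, seed=0):
    """Monte Carlo estimate of the mutual information (bits/symbol) of
    BPSK over real AWGN, i.e., the finite-alphabet achievable rate:
    I(X;Y) = 1 - E[log2(1 + exp(-x * LLR(y)))], LLR(y) = 2y / sigma^2."""
    rng = np.random.default_rng(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)
    x = rng.choice([-1.0, 1.0], size=n)
    y = x + rng.normal(0.0, np.sqrt(sigma2), size=n)
    llr = 2.0 * y / sigma2          # log p(y|+1) - log p(y|-1)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * llr)))
```

Unlike Gaussian inputs, the rate saturates at 1 bit/symbol as SNR grows, which is why finite-alphabet AR needs its own estimation machinery.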
Abstract: The Nelson-Siegel model (NS model) and two extended NS models were compared using daily interbank government bond data. Based on a grouping of bonds according to residual term to maturity, the empirical research proceeded with in-sample and out-of-sample tests. The results show that the three models are almost equivalent in estimating the interbank term structure of interest rates. For terms to maturity between 0 and 7 years, the gap in the absolute errors of the three models between in-sample and out-of-sample is smaller than 0.2 Yuan, and the absolute values of the in-sample and out-of-sample errors are smaller than 0.1 Yuan, so the estimation is credible. For terms to maturity between 7 and 20 years, the gap in the absolute errors of the three models between in-sample and out-of-sample is larger than 0.4 Yuan, and the absolute values of the in-sample and out-of-sample errors are larger than 1.0 Yuan, so the estimation is not credible.
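The baseline Nelson-Siegel spot-rate curve has a well-known closed form; the extended models add further curvature terms. A sketch of the baseline (parameter values are illustrative, not estimates from the paper):

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel spot rate y(tau):
    y = beta0 + beta1 * (1 - exp(-tau/lam)) / (tau/lam)
             + beta2 * [(1 - exp(-tau/lam)) / (tau/lam) - exp(-tau/lam)],
    where beta0 is the long-run level, beta0 + beta1 the short-rate limit,
    beta2 the curvature, and lam the decay scale."""
    x = tau / lam
    slope = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Illustrative upward-sloping curve at 0.25, 1, 5 and 20 years
y = nelson_siegel(np.array([0.25, 1.0, 5.0, 20.0]),
                  beta0=0.04, beta1=-0.02, beta2=0.01, lam=2.0)
```

The flattening of the loading functions at long maturities hints at why all three models struggle beyond 7 years: long-end yields are governed almost entirely by the single level parameter beta0.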
Fund: Sponsored by the National Natural Science Foundation of China (Grant No. 60901042) and the Natural Science Foundation of Heilongjiang Province (Grant No. F2007-08).
Abstract: A new compensation method for angular rate estimation in a non-gyro inertial measurement unit (NGIMU) is proposed to address accelerometer mounting error, which seriously affects the precision of navigation parameter estimation. Using the accelerometer output error function, the algorithm compensates the posture parameters in the traditional angular rate estimation algorithm to reduce the accelerometer mounting error. Building on traditional accelerometer configurations, a novel nine-accelerometer configuration of the NGIMU is presented and its mathematical model constructed. Semi-hardware simulations of the proposed algorithm are investigated based on the presented NGIMU configuration, and the results show the effectiveness of the new algorithm.
Abstract: Aging is an inevitable process that is usually measured by chronological age, with people aged 65 and over being defined as "older individuals". There is disagreement in the current scientific literature regarding the best methods to estimate the glomerular filtration rate (eGFR) in older adults. Several studies suggest the use of an age-adjusted definition to improve accuracy and avoid overdiagnosis. In contrast, some researchers argue that such changes could complicate the classification of chronic kidney disease (CKD). Several formulas, including the Modification of Diet in Renal Disease, CKD-Epidemiology Collaboration, and Cockcroft-Gault equations, are used to estimate eGFR. However, each of these formulas has significant limitations when applied to older adults, primarily due to sarcopenia and malnutrition, which greatly affect both muscle mass and creatinine levels. Alternative formulas, such as the Berlin Initiative Study and the Full Age Spectrum equations, provide more accurate estimates for older adults by accounting for age-related physiological changes. In frail older adults, the use of cystatin C leads to better eGFR calculations for assessing renal function. Accurate eGFR measurements improve the health of older patients by enabling better medication dosing. A thorough approach that includes multiple calibrated diagnostic methods and a detailed geriatric assessment is necessary for the effective management of kidney disease and other age-related conditions in older adults.
Abstract: Digital pathology is a major revolution in pathology and is changing the clinical routine for pathologists. We work on providing a computer-aided diagnosis (CAD) system that automatically and robustly provides the pathologist with a second opinion for many diagnosis tasks. However, inter-observer variability prevents thorough validation of any proposed technique on specific problems. In this work, we study the variability and reliability of proliferation rate estimation (PRE) from digital pathology images of breast cancer. We also study the robustness of our recently proposed CAD system for PRE. Three statistical significance tests showed that our automated CAD system was as reliable as the expert pathologist in both brown and blue nuclei estimation on a dataset of 100 images.
Abstract: This paper presents the derivation of a priori error estimates and convergence rates of finite element processes for boundary value problems (BVPs) described by self-adjoint, non-self-adjoint, and nonlinear differential operators. A posteriori error estimates are discussed in the context of local approximations in higher-order scalar product spaces. An a posteriori error computational framework (requiring no knowledge of the theoretical solution) is presented for all BVPs regardless of the method of approximation employed in constructing the integral form. This enables computation of local errors as well as global errors in the computed finite element solutions. The two most significant and essential aspects of the research presented in this paper that enable all of the features described above are: 1) ensuring variational consistency of the integral form(s) resulting from the methods of approximation for self-adjoint, non-self-adjoint, and nonlinear differential operators, and 2) choosing local approximations for the elements of a discretization in a subspace of a higher-order scalar product space that is minimally conforming, hence ensuring the desired global differentiability of the approximations over the discretization. It is shown that when the theoretical solution of a BVP is analytic, the a priori error estimate (in the asymptotic range, discussed in a later section of the paper) is independent of the method of approximation and of the nature of the differential operator, provided the resulting integral form is variationally consistent. Thus, finite element processes utilizing integral forms based on different methods of approximation, but resulting in variationally consistent (VC) integral forms, yield the same a priori error estimate and convergence rate. It is shown that a VC integral form has the best approximation property in some norm; conversely, an integral form with the best approximation property in some norm is variationally consistent.
That is, the best approximation property of the integral form and its variational consistency are equivalent; one cannot exist without the other, hence they can be used interchangeably. Model problems consisting of the diffusion equation, convection-diffusion equation, and Burgers equation, described by self-adjoint, non-self-adjoint, and nonlinear differential operators, are considered to present extensive numerical studies using the Galerkin method with weak form (GM/WF) and the least squares process (LSP) to determine computed convergence rates of various error norms and to compare them with the theoretical convergence rates.
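The a priori estimates discussed above take the standard asymptotic form, which can be stated generically as follows (a textbook statement under the assumptions of a smooth theoretical solution and minimally conforming local approximations of degree p; not reproduced verbatim from the paper):

```latex
% Generic a priori finite element error estimate (asymptotic range):
% for theoretical solution u, computed solution u_h, element size h,
% and local approximation of degree p,
\| u - u_h \|_{H^m(\Omega)} \;\le\; C \, h^{\,p+1-m} \, \| u \|_{H^{p+1}(\Omega)},
% so the computed convergence rate of the error in the H^m-norm is p + 1 - m.
% When the integral form is variationally consistent, this rate is the same
% regardless of the method of approximation used to construct the form.
```

Comparing computed rates against p + 1 - m is precisely how the numerical studies with GM/WF and LSP verify the theory.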