Fund: Project supported by the National Natural Science Foundation of China.
Abstract: In this paper, we discuss some characteristic properties of partial abstract data types (PADTs) and show the difference between a PADT and an abstract data type (ADT) in the specification of programming languages. Finally, we clarify that PADTs are necessary in programming-language description.
Fund: Supported by the National Key Research and Development Program of China (No. 2018YFB1500803), the National Natural Science Foundation of China (Nos. 61773118 and 61703100), and the Fundamental Research Funds for the Central Universities.
Abstract: Boosted by a strong solar power market, the electricity grid is exposed to risk under an increasing share of fluctuating solar power. To increase the stability of the electricity grid, an accurate solar power forecast is needed to evaluate such fluctuations. In terms of forecasting, solar irradiance is the key factor of solar power generation, and it is affected by atmospheric conditions, including surface meteorological variables and column-integrated variables. These variables involve multiple numerical time-series and images. However, few studies have focused on methods for processing multiple data types in an inter-hour direct normal irradiance (DNI) forecast. In this study, a framework for predicting the DNI over a 10-min time horizon was developed, which included the nondimensionalization of multiple data types and time-series, development of a forecast model, and transformation of the outputs. Several atmospheric variables were considered in the forecast framework, including the historical DNI, wind speed and direction, and relative-humidity time-series, as well as ground-based cloud images. Experiments were conducted to evaluate the performance of the forecast framework. The experimental results demonstrate that the proposed method performs well, with a normalized mean bias error of 0.41% and a normalized root mean square error (nRMSE) of 20.53%, and outperforms the persistence model with an improvement of 34% in the nRMSE.
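For reference, the two error metrics quoted above can be computed as follows (a generic sketch with made-up numbers, not the authors' code; both metrics are normalized by the mean observed irradiance):

```python
import numpy as np

def normalized_errors(forecast, observed):
    """Normalized mean bias error (nMBE) and normalized root mean square
    error (nRMSE), both expressed as percentages of the mean observation."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    err = forecast - observed
    scale = observed.mean()
    nmbe = 100.0 * err.mean() / scale
    nrmse = 100.0 * np.sqrt((err ** 2).mean()) / scale
    return nmbe, nrmse

# Toy example: two forecasts of 500 W/m^2, off by +10 and -10
nmbe, nrmse = normalized_errors([510.0, 490.0], [500.0, 500.0])
```

Here the biases cancel (nMBE = 0%) while the nRMSE remains positive, which is why both metrics are usually reported together.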
Abstract: During pre-clinical pharmacokinetic research, it is not easy to gather complete pharmacokinetic data from each animal. In some cases, an animal can provide only a single observation. Under this circumstance, it is not clear how to use such data to estimate the pharmacokinetic parameters effectively. This study aimed to compare a new method for handling such single-observation-per-animal data with the conventional method in estimating pharmacokinetic parameters. We assumed there were 15 animals in the study, each receiving a single dose by intravenous injection and providing one observation point. There were five time points in total, and each time point contained three measurements. The data were simulated with a one-compartment model with first-order elimination. The inter-individual variabilities (IIV) were set to 10%, 30% and 50% for both clearance (CL) and apparent volume of distribution (V). A proportional model was used to describe the residual error, which was also set to 10%, 30% and 50%. Two methods for handling the simulated single-observation-per-animal data in estimating pharmacokinetic parameters were compared. The conventional method (M1) estimated the pharmacokinetic parameters directly from the original single-observation-per-animal data. The finite resampling method (M2) expanded the original data into a new dataset by resampling the original data over all combinations by time. After resampling, each individual in the new dataset contained a complete pharmacokinetic profile; in this study, there were 243 (3×3×3×3×3 = 3^5) possible combinations, each of which constituted a virtual animal. The study was simulated 100 times with the NONMEM software.
According to the results, the parameter estimates of CL and V by M2 based on the simulated dataset were closer to their true values, though there was a small difference among the different combinations of IIVs and residual errors. In general, M2 became less advantageous over M1 as the residual error increased. It was also influenced by the level of IIV, as higher levels of IIV reduced the advantage of M2. However, neither M2 nor M1 was able to estimate the IIV of the parameters. The finite resampling method could provide more reliable results than the conventional method in estimating pharmacokinetic parameters from single-observation-per-animal data. Compared with the inter-individual variability, the estimation results were mainly influenced by the residual error.
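The finite-resampling expansion (M2) described above can be sketched as follows. The concentration values here are invented for illustration; the actual study used NONMEM-simulated data:

```python
from itertools import product

# Hypothetical design: 5 time points (h), 3 concentration measurements
# each (one animal per measurement), mirroring the study layout above.
times = [0.5, 1.0, 2.0, 4.0, 8.0]
obs = {
    0.5: [12.1, 11.8, 12.4],
    1.0: [9.5, 10.1, 9.8],
    2.0: [6.7, 7.0, 6.4],
    4.0: [3.3, 3.6, 3.1],
    8.0: [0.9, 1.1, 1.0],
}

# Finite resampling: every combination of one observation per time point
# becomes one "virtual animal" with a complete (time, concentration) profile.
virtual_animals = [
    list(zip(times, combo))
    for combo in product(*(obs[t] for t in times))
]

print(len(virtual_animals))  # 3**5 = 243 virtual profiles
```

Each of the 243 virtual profiles is then treated as a complete individual in the expanded dataset fed to the fitting software.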
Fund: Supported by the National Natural Science Foundation of China under Grant Nos. 11175093, 11222545, 11435006 and 11375092, and by the K. C. Wong Magna Fund of Ningbo University.
Abstract: We use the latest baryon acoustic oscillation and Union 2.1 type Ia supernova data to test the cosmic opacity between different redshift regions without assuming any cosmological model. It is found that the universe may be opaque between the redshift regions 0.35-0.44, 0.44-0.57 and 0.6-0.73, since the best-fit values of the cosmic opacity in these regions are positive, while a transparent universe is favored in the redshift region 0.57-0.63. However, in general, a transparent universe is still consistent with the observations at the 1σ confidence level.
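For context, cosmic-opacity tests of this kind conventionally parameterize the dimming of the supernova flux by an optical depth τ, so that the observed luminosity distance exceeds the true one (a standard relation in this literature, not quoted from this abstract):

```latex
F_{\mathrm{obs}} = F_{\mathrm{true}}\, e^{-\tau}
\quad\Longrightarrow\quad
D_{L,\mathrm{obs}} = D_{L,\mathrm{true}}\, e^{\tau/2}
```

with τ > 0 indicating an opaque universe and τ = 0 a transparent one, which is why positive best-fit values of τ are read as opacity above.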
Abstract: The exponentiated generalized Weibull distribution is a probability distribution which generalizes the Weibull distribution by introducing two more shape parameters to better fit non-monotonic shapes. The parameters of the new probability distribution function are estimated by the maximum likelihood method under progressively type-II censored data via the expectation-maximization algorithm.
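One common parameterization of the exponentiated generalized family applied to a Weibull baseline is sketched below (the abstract does not state its exact parameterization, so this is an assumption; `a` and `b` are the two extra shape parameters):

```python
import math

def egw_cdf(x, a, b, k, lam):
    """CDF of an exponentiated generalized Weibull distribution in one
    common parameterization, F(x) = (1 - (1 - G(x))**a)**b, where G is
    the Weibull CDF with shape k and scale lam. The extra shape
    parameters a and b allow non-monotonic hazard shapes."""
    g = 1.0 - math.exp(-((x / lam) ** k))  # baseline Weibull CDF
    return (1.0 - (1.0 - g) ** a) ** b
```

With a = b = 1 this collapses to the plain Weibull CDF, which is the sense in which the new distribution "generalizes" Weibull.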
Abstract: A Type-I censoring mechanism arises when the number of units experiencing the event is random but the total duration of the study is fixed. A number of mathematical approaches have been developed to handle this type of data. The purpose of this research was to estimate the three parameters of the Fréchet distribution via the frequentist maximum likelihood and Bayesian estimators. The maximum likelihood estimates (MLEs) of the three parameters are not available in closed form; therefore, they were obtained by numerical methods. Similarly, the Bayesian estimators are implemented using Jeffreys and gamma priors with two loss functions: the squared error loss function and the linear exponential loss function (LINEX). The Bayesian estimates of the Fréchet parameters cannot be obtained analytically either, so Markov chain Monte Carlo is used, with the full conditional distributions of the three parameters sampled via the Metropolis-Hastings algorithm. The estimators are compared using mean square errors (MSE) to determine the best estimator of the three parameters of the Fréchet distribution. The results show that Bayesian estimation under the linear exponential loss function based on Type-I censored data is the better estimator for all parameter estimates when the value of the loss parameter is positive.
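A minimal random-walk Metropolis-Hastings sampler of the kind used inside such MCMC schemes is sketched below (a generic one-dimensional sketch, not the authors' implementation; their sampler targets the full conditionals of three Fréchet parameters):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: log_post is the log of the target
    density up to an additive constant; proposals are Gaussian steps
    around the current state."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop) / post(x))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Example: sample a standard normal target
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

In a full Fréchet analysis, one such update per parameter is cycled within a Gibbs-style loop over the three full conditionals.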
Abstract: To achieve and retain high quality in an academic program, periodic evaluation of the quality of each course offered is an established practice in reputed institutions; however, there has been little such effort regarding humanities courses. This article analyzes evaluation data, collected on Likert-type items, for a humanities course at a College of Commerce & Economics, Mumbai, Maharashtra, India. The appropriateness of one parametric measure and three non-parametric measures is discussed, and the measures are applied; the results could provide useful clues for educational policy planners. According to the analytical results from these four measures, regardless of the threshold chosen for satisfaction among students, the overall performance of almost every subject has been unsatisfactory. A focused approach is needed to bring every course to a high level of performance. The inconsistency observed under every threshold further suggests that, for such globally poorly performing subjects, it suffices to analyze only the global-level item; only once the global-level analysis reveals high performance of a course should item-specific analysis be pursued to identify the items requiring further improvement.
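The abstract does not name its four measures, but the parametric-versus-non-parametric contrast for Likert items can be illustrated as follows (hypothetical responses; the article's raw data are not published here):

```python
from statistics import mean, median, mode

# Hypothetical responses on one 5-point Likert item
# (1 = strongly dissatisfied ... 5 = strongly satisfied)
responses = [2, 3, 3, 4, 2, 1, 3, 2, 4, 3]

item_mean = mean(responses)      # parametric summary (treats scale as interval)
item_median = median(responses)  # non-parametric summary (ordinal)
item_mode = mode(responses)      # non-parametric summary (most frequent rating)
# Share of students at or above a chosen satisfaction threshold of 4
satisfied = sum(r >= 4 for r in responses) / len(responses)
```

The threshold-based share is the kind of quantity the abstract refers to when it speaks of satisfaction "regardless of the threshold" chosen.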
Fund: Project supported in part by the National Basic Research Program (973) of China (No. 2002B312103), the National Natural Science Foundation of China (No. 3027466), and the Chinese Academy of Sciences.
Abstract: This study investigated the characteristics that people perceive in tables and graphs, and the data types for which people consider each of the two displays most appropriate. The participants in this survey were 195 teachers and undergraduates from four universities in Beijing. The results showed people's different attitudes towards the two forms of display.
Fund: Supported by the National Natural Science Foundation of China (No. 51475321) and the Tianjin Research Program of Application Foundation and Advanced Technology (Nos. 15JCZDJC38900 and 16JCYBJC19300).
Abstract: This paper focuses on the type synthesis of two-degree-of-freedom (2-DoF) rotational parallel mechanisms (RPMs) to be applied as mechanisms actuating an inter-satellite link antenna. Based upon Lie group theory, two steps, besides describing the continuous desired motions of the moving platform, are necessary to synthesize 2-DoF RPMs: generation of the required open-loop limbs between the fixed base and the moving platform, and definition of assembly principles for these limbs. Firstly, all available displacement subgroups or submanifolds are obtained readily from the fact that the continuous motion of the moving platform is the intersection of those of all open-loop limbs. These subgroups or submanifolds are used to generate all the topology structures of the limbs. By describing the characteristics of the displacement subgroups and submanifolds intuitively through simple geometrical symbols, their intersection and union operations can be carried out easily. On this basis, two types of assembly principles are defined to synthesize all 2-DoF RPMs using the obtained limbs. Finally, two novel categories of 2-DoF RPMs are provided by introducing a circular track and an articulated rotating platform, respectively. This work lays the foundation for the analysis and optimal design of 2-DoF RPMs that actuate the inter-satellite link antenna.
Fund: Supported by the Special Project on Precision Medicine under the National Key R&D Program (Nos. 2016YFC0903003 and 2017YFC0909600), the National Natural Science Foundation of China (Nos. 81670462 and 81422006 to Q.C., and Nos. 81670748 and 81471035 to J.Y.), and the Beijing Natural Science Foundation (No. 7171006 to J.Y.).
Abstract: Enrichment analysis methods, e.g., gene set enrichment analysis, represent one class of important bioinformatic resources for mining patterns in biomedical datasets. However, tools for inferring patterns and rules from a list of drugs are limited. In this study, we developed a web-based tool, DrugPattern, for drug set enrichment analysis. We first collected and curated 7019 drug sets, including indications, adverse reactions, targets, pathways, etc., from public databases. For a list of drugs of interest, DrugPattern then evaluates the significance of the enrichment of these drugs in each of the 7019 drug sets. To validate DrugPattern, we employed it to predict the effects of oxidized low-density lipoprotein (oxLDL), a factor expected to be deleterious. We predicted that oxLDL has beneficial effects on some diseases, most of which were supported by evidence in the literature. Because DrugPattern predicted potential beneficial effects of oxLDL in type 2 diabetes (T2D), animal experiments were then performed to further verify this prediction. The experimental evidence validated the DrugPattern prediction that oxLDL indeed has beneficial effects on T2D under energy restriction. These data confirm the prediction accuracy of our approach and reveal unexpected protective roles for oxLDL in various diseases. This study provides a tool to infer patterns and rules in biomedical datasets based on drug set enrichment analysis.
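Set-enrichment significance of this kind is commonly scored with a one-sided hypergeometric (Fisher) test; a sketch follows (the abstract does not specify DrugPattern's exact statistic, so this is an assumption):

```python
from math import comb

def enrichment_pvalue(hits, query_size, set_size, universe):
    """One-sided hypergeometric p-value for over-representation: the
    probability of seeing at least `hits` members of a curated drug set
    of `set_size` in a query list of `query_size` drugs, drawn without
    replacement from `universe` drugs in total."""
    p = 0.0
    upper = min(query_size, set_size)
    for k in range(hits, upper + 1):
        p += comb(set_size, k) * comb(universe - set_size, query_size - k)
    return p / comb(universe, query_size)
```

In practice such a p-value would be computed for each of the 7019 drug sets and then corrected for multiple testing.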
Abstract: In order to design a more efficient and convenient temperature acquisition system, an approach combining a USB data acquisition card with a K-type thermocouple temperature sensor is proposed in the LabVIEW 2012 programming environment. Firstly, the LabVIEW 2012 programming software is used to build the temperature acquisition control program. Secondly, a K-type thermocouple temperature sensor is employed to transduce the temperature information. Thirdly, the USB data acquisition card collects the voltage of the K-type thermocouple sensor and converts it to a temperature value. Moreover, the simplified experimental procedure greatly reduces the cost of development. Finally, the experimental results illustrate that the measurement temperature range is wider and the temperature readings are more accurate.
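The voltage-to-temperature conversion step can be illustrated with a first-order approximation based on the nominal ~41 µV/°C Seebeck sensitivity of type K (a linearized sketch only; an accurate system, including the one described, would use the NIST ITS-90 polynomial tables):

```python
def ktype_voltage_to_temp(v_mV, cold_junction_C=25.0):
    """Rough K-type thermocouple conversion from measured voltage (mV)
    to hot-junction temperature (degC), assuming a constant Seebeck
    sensitivity and cold-junction compensation at cold_junction_C.
    Valid only as a mid-range linear approximation."""
    SENSITIVITY_mV_PER_C = 0.041  # approximate nominal sensitivity
    return cold_junction_C + v_mV / SENSITIVITY_mV_PER_C
```

For example, 4.1 mV above the cold-junction reading corresponds to roughly 100 °C of temperature rise under this approximation.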