During pre-clinical pharmacokinetic research, it is not easy to gather complete pharmacokinetic data from each animal. In some cases, an animal can provide only a single observation, and it is not clear how to use such data to estimate pharmacokinetic parameters effectively. This study compared a new method for handling such single-observation-per-animal data with the conventional method for estimating pharmacokinetic parameters. We assumed 15 animals in the study, each receiving a single dose by intravenous injection and providing one observation point. There were five time points in total, each with three measurements. The data were simulated with a one-compartment model with first-order elimination. The inter-individual variabilities (IIV) were set to 10%, 30% and 50% for both clearance (CL) and apparent volume of distribution (V). A proportional model was used to describe the residual error, which was also set to 10%, 30% and 50%. Two methods for handling the simulated single-observation-per-animal data in estimating pharmacokinetic parameters were compared. The conventional method (M1) estimated pharmacokinetic parameters directly from the original data, i.e., the single-observation-per-animal data. The finite resampling method (M2) expanded the original data into a new dataset by resampling the original observations over all possible combinations by time; after resampling, each individual in the new dataset had a complete pharmacokinetic profile. In this study there were 243 (3×3×3×3×3) possible combinations, each treated as a virtual animal.
The study was simulated 100 times with the NONMEM software. According to the results, the estimates of CL and V obtained by M2 were closer to their true values, though there were small differences among the combinations of IIV and residual error. In general, the advantage of M2 over M1 decreased as the residual error increased; higher levels of IIV also reduced it. Neither M1 nor M2 could estimate the IIV of the parameters. The finite resampling method thus provides more reliable results than the conventional method for estimating pharmacokinetic parameters from single-observation-per-animal data, and the estimation results were influenced mainly by the residual error rather than by the inter-individual variability.
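The finite resampling step described above can be sketched as follows. The concentration values are illustrative, not the simulated values from the study; only the structure (five time points, three measurements each, every by-time combination becoming a virtual animal) follows the abstract.

```python
from itertools import product

# Illustrative single-observation-per-animal data: five time points (h),
# three measured concentrations per time point -- 15 animals in total.
# The values below are hypothetical.
observations = {
    0.5: [12.1, 11.4, 13.0],
    1.0: [9.8, 10.5, 9.1],
    2.0: [7.2, 6.8, 7.9],
    4.0: [4.1, 3.6, 4.4],
    8.0: [1.5, 1.2, 1.8],
}

def finite_resample(obs):
    """Expand the data into every by-time combination of measurements.

    Each combination becomes one virtual animal with a complete
    concentration-time profile, as in method M2.
    """
    times = sorted(obs)
    return [dict(zip(times, combo))
            for combo in product(*(obs[t] for t in times))]

profiles = finite_resample(observations)
print(len(profiles))  # 3*3*3*3*3 = 243 virtual animals
```

The expanded dataset can then be fitted as if it contained 243 animals with complete profiles.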
We address the well-posedness of the 2D (Euler)-Boussinesq equations with zero viscosity and positive diffusivity in polygonal-like domains with Yudovich-type data, which gives a positive answer to part of the questions raised in Lai (Arch Ration Mech Anal 199(3):739-760, 2011). Our analysis on polygonal-like domains relies essentially on the recent elliptic regularity results for such domains proved in Bardos et al. (J Math Anal Appl 407(1):69-89, 2013) and Di Plinio (SIAM J Math Anal 47(1):159-178, 2015).
The Exponentiated Generalized Weibull distribution is a probability distribution that generalizes the Weibull distribution by introducing two more shape parameters to better fit non-monotonic shapes. The parameters of the new probability distribution are estimated by the maximum likelihood method under progressive Type-II censored data via the expectation-maximization algorithm.
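For reference, in the exponentiated generalized family of distributions, one common parameterization (the paper's exact form may differ) adds two shape parameters $a, b > 0$ to the Weibull baseline cdf $G$:

```latex
% Weibull baseline with scale \lambda and shape k
G(x) = 1 - e^{-(\lambda x)^{k}}, \qquad x > 0,
% exponentiated generalized construction with extra shapes a, b > 0
F(x) = \Bigl\{\, 1 - \bigl[\, 1 - G(x) \,\bigr]^{a} \Bigr\}^{b}
     = \Bigl(\, 1 - e^{-a(\lambda x)^{k}} \Bigr)^{b}.
```

The extra parameters $a$ and $b$ are what allow non-monotonic hazard shapes that the two-parameter Weibull cannot capture.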
We use the latest baryon acoustic oscillation and Union 2.1 Type Ia supernova data to test the cosmic opacity between different redshift regions without assuming any cosmological model. It is found that the universe may be opaque between the redshift regions 0.35-0.44, 0.44-0.57 and 0.6-0.73, since the best-fit values of cosmic opacity in these regions are positive, while a transparent universe is favored in the redshift region 0.57-0.63. In general, however, a transparent universe is still consistent with observations at the 1σ confidence level.
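As a sketch of the underlying idea (the standard opacity parameterization; the paper's exact estimator may differ), an optical depth $\tau(z)$ dims observed fluxes and so biases the supernova luminosity distance relative to the true, BAO-calibrated one:

```latex
F_{\mathrm{obs}} = F_{\mathrm{true}}\, e^{-\tau(z)}
\quad\Longrightarrow\quad
D_{L,\mathrm{obs}}(z) = D_{L,\mathrm{true}}(z)\, e^{\tau(z)/2},
\qquad
\Delta\tau = \tau(z_{2}) - \tau(z_{1}),
```

so a best-fit $\Delta\tau > 0$ between two redshifts indicates an opaque stretch of the universe, while $\Delta\tau = 0$ corresponds to transparency.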
In this paper, we discuss some characteristic properties of the partial abstract data type (PADT) and show the difference between PADT and the abstract data type (ADT) in the specification of programming languages. Finally, we argue that PADT is necessary in programming language description.
Type-I censoring arises when the number of units experiencing the event is random but the total duration of the study is fixed, and a number of mathematical approaches have been developed to handle this type of data. The purpose of this research was to estimate the three parameters of the Frechet distribution via frequentist maximum likelihood and Bayesian estimators. The maximum likelihood estimates of the three parameters are not available in closed form; therefore, they were obtained by numerical methods. The Bayesian estimators are implemented using Jeffreys and gamma priors with two loss functions: the squared error loss function and the linear exponential (LINEX) loss function. The Bayesian estimates of the Frechet parameters cannot be obtained analytically either, so Markov chain Monte Carlo is used, with the full conditional distributions of the three parameters sampled via the Metropolis-Hastings algorithm. The estimators are compared using the mean squared error (MSE) to determine the best estimator of the three parameters of the Frechet distribution. The results show that, for Type-I censored data, Bayesian estimation under the LINEX loss function yields the best estimates of all parameters when the loss parameter is positive.
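A minimal sketch of the LINEX machinery referred to above. The standard LINEX loss and its Bayes estimator are shown; the Gamma posterior draws are a stand-in for the Metropolis-Hastings output described in the abstract, not the paper's actual posterior.

```python
import numpy as np

def linex_loss(estimate, theta, a=1.0):
    """LINEX loss: asymmetric in the sign of the error; a > 0 penalizes
    over-estimation more heavily than under-estimation."""
    d = estimate - theta
    return np.exp(a * d) - a * d - 1

def linex_bayes_estimator(posterior_draws, a=1.0):
    """Bayes estimator under LINEX loss from MCMC posterior draws:
    theta_hat = -(1/a) * log E[exp(-a * theta)]."""
    draws = np.asarray(posterior_draws, dtype=float)
    return -np.log(np.mean(np.exp(-a * draws))) / a

# Stand-in posterior: Gamma(2, 1) draws; for this posterior the exact
# LINEX estimate with a = 1 is ln 4 ~ 1.386, below the posterior mean 2
# (the asymmetric loss pulls the estimate down).
rng = np.random.default_rng(0)
draws = rng.gamma(shape=2.0, scale=1.0, size=100_000)
print(linex_bayes_estimator(draws, a=1.0))  # ~ 1.386
```

Under squared error loss the estimator would instead be the posterior mean; comparing the two by MSE over simulated datasets is how the abstract's comparison proceeds.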
To achieve and retain high quality in an academic program, periodic evaluation of the quality of each course offered is an established practice in reputed institutions; however, there has been little such effort regarding humanities courses. This article analyzes course evaluation data, collected on Likert-type items for a humanities course at a College of Commerce & Economics, Mumbai, Maharashtra, India. The appropriateness of one parametric measure and three non-parametric measures is discussed, which could provide useful clues for educational policy planners. By the analytical results of all four measures, and regardless of the threshold used for student satisfaction, the overall performance of almost every subject was unsatisfactory; a focused approach is needed to bring every course to a high level of performance. The inconsistency observed under every threshold further suggests that, for such globally poorly performing subjects, analysis is needed only at the global-level item; only once global-level analysis reveals high performance of a course should item-specific analysis be used to identify the items requiring further improvement.
This study investigated the characteristics that people perceive in tables and graphs, and the data types for which people consider each of the two displays most appropriate. The participants in the survey were 195 teachers and undergraduates from four universities in Beijing. The results showed people's different attitudes towards the two forms of display.
This paper focuses on the type synthesis of two degree-of-freedom (2-DoF) rotational parallel mechanisms (RPMs) that could serve as mechanisms actuating the inter-satellite link antenna. Based upon Lie group theory, besides describing the continuous desired motions of the moving platform, two steps are necessary to synthesize 2-DoF RPMs: generation of the required open-loop limbs between the fixed base and the moving platform, and definition of assembly principles for these limbs. First, all available displacement subgroups or submanifolds are obtained readily from the fact that the continuous motion of the moving platform is the intersection of those of all open-loop limbs; these subgroups and submanifolds are used to generate all the topology structures of the limbs. By describing the characteristics of the displacement subgroups and submanifolds intuitively with simple geometrical symbols, their intersection and union operations can be carried out easily. On this basis, two types of assembly principles are defined to synthesize all 2-DoF RPMs using the obtained limbs. Finally, two novel categories of 2-DoF RPMs are provided by introducing a circular track and an articulated rotating platform, respectively. This work lays the foundation for the analysis and optimal design of 2-DoF RPMs that actuate the inter-satellite link antenna.
The existing data mining methods are mostly focused on relational databases and structured data, but not on complex structured data such as Extensible Markup Language (XML). By converting the XML document type description into relational semantics that record the relations in the XML data, and by using an XML data mining language, the XML data mining system presents a strategy for mining information from XML.
Machine-type communication (MTC) devices provide a broad range of data collection, especially in environments that generate massive data, such as urban, industrial and event-driven areas. In dense deployments, data collected by MTC devices at nearby locations are spatially correlated. In this paper, we propose a k-means grouping technique that groups MTC devices based on this spatial correlation. The MTC devices collect data in the event-based area and then transmit them to a centralized aggregator for processing and computing. Because computational resources at the centralized aggregator are limited, the data of some grouped MTC devices are offloaded to a nearby base station collocated with a mobile edge computing server. To model the sensing capability of the MTC devices, we use a power exponential function to compute the correlation coefficient between devices. Within this framework, we compare the energy consumption when all data are processed locally at the centralized aggregator with that when data are offloaded to the mobile edge computing server, against the optimal solution obtained by the brute force method. The simulation results reveal that the proposed k-means grouping technique reduces the energy consumption at the centralized aggregator while satisfying the required completion time.
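The two modeling ingredients named above can be sketched together: a power exponential correlation model and k-means grouping of device locations. The range and shape parameters, the device layout, and the minimal k-means implementation are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def power_exponential_corr(d, theta=50.0, gamma=1.0):
    """Power exponential correlation between two MTC devices a distance d
    apart; theta (range) and gamma (shape) are illustrative values."""
    return np.exp(-(d / theta) ** gamma)

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: group devices by location, so devices within a
    group are close together and hence (under the model above) strongly
    correlated."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each device to its nearest center, then recompute centers.
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
devices = rng.uniform(0.0, 200.0, size=(30, 2))  # 30 devices in a 200 m square
labels, centers = kmeans(devices, k=3)
```

Each resulting group would then forward its (highly redundant) data to the aggregator or the edge server, which is where the energy comparison in the abstract takes place.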
Enrichment analysis methods, e.g., gene set enrichment analysis, represent one class of important bioinformatics resources for mining patterns in biomedical datasets. However, tools for inferring patterns and rules from a list of drugs are limited. In this study, we developed a web-based tool, DrugPattern, for drug set enrichment analysis. We first collected and curated 7019 drug sets, including indications, adverse reactions, targets, pathways, etc., from public databases. For a list of drugs of interest, DrugPattern then evaluates the significance of the enrichment of these drugs in each of the 7019 drug sets. To validate DrugPattern, we employed it to predict the effects of oxidized low-density lipoprotein (oxLDL), a factor expected to be deleterious. We predicted that oxLDL has beneficial effects on some diseases, most of which were supported by evidence in the literature. Because DrugPattern predicted potential beneficial effects of oxLDL in type 2 diabetes (T2D), animal experiments were then performed to verify this prediction. The experimental evidence validated the prediction that oxLDL indeed has beneficial effects on T2D under energy restriction. These data confirm the prediction accuracy of our approach and reveal unexpected protective roles for oxLDL in various diseases. This study provides a tool to infer patterns and rules in biomedical datasets based on drug set enrichment analysis.
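The abstract does not state DrugPattern's exact enrichment statistic; set enrichment tools commonly use a one-sided hypergeometric (Fisher) test, sketched here with hypothetical set sizes.

```python
from math import comb

def enrichment_p(n_universe, n_set, n_query, n_overlap):
    """One-sided hypergeometric p-value: probability of at least n_overlap
    hits when n_query drugs are drawn from a universe of n_universe drugs,
    n_set of which belong to the drug set being tested."""
    total = comb(n_universe, n_query)
    return sum(comb(n_set, k) * comb(n_universe - n_set, n_query - k)
               for k in range(n_overlap, min(n_set, n_query) + 1)) / total

# Hypothetical numbers: 6 of 10 query drugs fall into a 50-drug set drawn
# from a universe of 2000 drugs (expected overlap is only 0.25).
p = enrichment_p(2000, 50, 10, 6)
print(p < 1e-6)  # highly significant enrichment
```

In a tool like DrugPattern this test would be repeated over all 7019 drug sets, with multiple-testing correction applied to the resulting p-values.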
The compilation of a 1:250,000 vegetation type map of the North-South transitional zone and 1:50,000 vegetation type maps of typical mountainous areas is one of the main tasks of the Integrated Scientific Investigation of the North-South Transitional Zone of China. In the past, vegetation type maps were compiled from a large number of ground field surveys. Although the field survey method is accurate, it is time-consuming and covers only a small area owing to the limitations of the physical environment. Remote sensing data can make up for these limitations because of their full coverage. However, there are still difficulties and bottlenecks in the extraction of vegetation type information from remote sensing data, especially in automatic extraction. Taking the compilation of a 1:50,000 vegetation type map as an example, this paper explores remote sensing extraction and mapping methods for vegetation types at medium and large scales based on the mountain altitudinal belts of Taibai Mountain, using multi-temporal high-resolution remote sensing data, ground survey data, a previous vegetation type map and forest survey data. The results show that: 1) Mountain altitudinal belts can effectively support the remote sensing classification and mapping of a 1:50,000 vegetation type map in mountainous areas. Terrain constraint factors carrying altitudinal belt information can be generated from the mountain altitudinal belts and the 1:10,000 Digital Surface Model (DSM) data of Taibai Mountain. Combining these terrain constraint factors with multi-temporal high-resolution remote sensing data, ground survey data and previous small-scale vegetation type map data, the vegetation types at all levels can be extracted effectively. 2) The basic remote sensing interpretation and mapping process for typical mountains is: interpretation of vegetation type-groups → interpretation of vegetation formation groups, formations and subformations → interpretation and classification of vegetation types and subtypes. This is a combination of the top-down and bottom-up methods, rather than a purely top-down or bottom-up classification according to the level of mapping units. The results of this study provide a demonstration and scientific basis for the compilation of large and medium scale vegetation type maps.
Funding (cosmic opacity study): Supported by the National Natural Science Foundation of China under Grant Nos 11175093, 11222545, 11435006 and 11375092, and the K.C. Wong Magna Fund of Ningbo University.
Funding (partial abstract data type study): Supported by the National Natural Science Foundation of China.
Funding (table and graph study): Supported in part by the National Basic Research Program (973) of China (No. 2002B312103), the National Natural Science Foundation of China (No. 3027466), and the Chinese Academy of Sciences.
Funding (type synthesis study): Supported by the National Natural Science Foundation of China (No. 51475321) and the Tianjin Research Program of Application Foundation and Advanced Technology (Nos 15JCZDJC38900 and 16JCYBJC19300).
Funding (DrugPattern study): Supported by the Special Project on Precision Medicine under the National Key R&D Program (2016YFC0903003 and 2017YFC0909600), the National Natural Science Foundation of China (Nos 81670462 and 81422006 to Q.C.; 81670748 and 81471035 to J.Y.), and the Beijing Natural Science Foundation (No. 7171006 to J.Y.).
Funding (vegetation mapping study): National Natural Science Foundation of China (Nos 41871350 and 41571099); Scientific and Technological Basic Resources Survey Project (No. 2017FY100900).