The precise and accurate knowledge of genetic parameters is a prerequisite for making efficient selection strategies in breeding programs. A number of heritability estimates for important economic traits in many marine mollusks are available in the literature; however, very few studies have evaluated the accuracy of genetic parameters estimated with different family structures. Thus, in the present study, the effect of parent sample size on the precision of genetic parameter estimates for four growth traits in the clam M. meretrix under factorial designs was analyzed through restricted maximum likelihood (REML) and Bayesian inference. The results showed that the average estimated heritabilities of growth traits obtained from REML were 0.23-0.32 for 9 and 16 full-sib families and 0.19-0.22 for 25 full-sib families. When using Bayesian inference, the average estimated heritabilities were 0.11-0.12 for 9 and 16 full-sib families and 0.13-0.16 for 25 full-sib families. Compared with REML, Bayesian inference yielded lower heritabilities, but they still remained at a medium level. When the number of parents increased from 6 to 10, the estimated heritabilities were closer to 0.20 in REML and 0.12 in Bayesian inference. Genetic correlations among traits were positive and high, with no significant differences between designs of different sizes. The accuracies of breeding values estimated from the 9 and 16 families were lower than those from the 25 families. Our results provide a basic genetic evaluation for growth traits and should be useful for the design and operation of a practical selective breeding program in the clam M. meretrix.
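The abstract does not include the estimation machinery itself. As a minimal illustration of how full-sib family structure translates into a heritability estimate, the sketch below uses a one-way ANOVA (method-of-moments) decomposition on simulated full-sib data rather than the study's REML or Bayesian animal models; family counts, family size, and variances are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate growth-trait records for f full-sib families, n offspring each
# (illustrative values; not data from the study).
f, n, true_h2 = 25, 30, 0.2
sigma_p2 = 1.0                           # total phenotypic variance
sigma_b2 = true_h2 / 2 * sigma_p2        # between-family variance = h2/2 for full sibs
fam_eff = rng.normal(0.0, np.sqrt(sigma_b2), size=f)
y = fam_eff[:, None] + rng.normal(0.0, np.sqrt(sigma_p2 - sigma_b2), size=(f, n))

# One-way ANOVA variance components (method of moments, not REML)
msb = n * y.mean(axis=1).var(ddof=1)     # between-family mean square
msw = y.var(axis=1, ddof=1).mean()       # within-family mean square
s2_b = (msb - msw) / n                   # between-family variance component
t = s2_b / (s2_b + msw)                  # intraclass correlation of full sibs
h2 = 2 * t                               # full-sib h2 estimate
print(f"estimated h2 = {h2:.3f}")
```

Doubling the intraclass correlation is an upper bound for h² in full-sib designs, since the between-family component also absorbs dominance and common-environment variance; the number and size of families directly set the sampling error of both mean squares, which is why the design changes the estimates.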
To clarify the most appropriate sample size for obtaining phenotypic data for a single line, we investigated the main-effect QTLs (M-QTLs) of a quantitative trait, plant height (ph), in a recombinant inbred line (RIL) population of rice (derived from the cross between Xieqingzao B and Zhonghui 9308) using five individual plants in 2006 and 2009. Twenty-six ph phenotypic datasets from the completely random combinations of 2, 3, 4, and 5 plants in a single line, and five ph phenotypic datasets from five individual plants, were used to detect the QTLs. Fifteen M-QTLs were detected by 1 to 31 datasets. Of these, qph7a was detected repeatedly by all 31 ph datasets in 2006 and explained 11.67% to 23.93% of phenotypic variation; qph3 was detected repeatedly by all 31 datasets and explained 5.21% to 7.93% and 11.51% to 24.46% of phenotypic variance in 2006 and 2009, respectively. The results indicate that an M-QTL for a quantitative trait can be detected repeatedly using the phenotypic values from 5 individual plants and the 26 sets of completely random combinations of phenotypic data within a single line in an RIL population under different environments. The sample size for a single line of the RIL population did not affect the efficiency of identifying stably expressed M-QTLs.
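For readers unfamiliar with how the per-line sample size feeds into QTL detection, the following hedged sketch runs a single-marker F-test on a simulated RIL population while varying how many plants are averaged per line. The marker effect, population size, and noise level are invented for illustration; the study itself used genome-wide QTL mapping, not this one-marker scan.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated RIL population: 200 lines scored at one biallelic marker (0/1);
# each line's phenotype is recorded on k = 5 individual plants. All values invented.
n_lines, k = 200, 5
geno = rng.integers(0, 2, size=n_lines)
line_mean = 100 + 2.0 * geno                      # marker adds 2 cm to plant height
pheno = line_mean[:, None] + rng.normal(0, 5, size=(n_lines, k))

for plants_used in (1, 2, 3, 4, 5):               # vary the per-line sample size
    y = pheno[:, :plants_used].mean(axis=1)       # line value = mean of that many plants
    F, p = stats.f_oneway(y[geno == 0], y[geno == 1])
    r2 = np.corrcoef(geno, y)[0, 1] ** 2          # variance explained by the marker
    print(f"{plants_used} plant(s): F={F:.1f}, p={p:.1e}, R^2={r2:.1%}")
```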
Surface/underwater target classification is a key topic in marine information research. However, the complex underwater environment, coupled with the diversity of target types and their variable characteristics, presents significant challenges for classifier design. For shallow-water waveguides with a negative thermocline, a residual neural network (ResNet) model based on the sound field elevation structure is constructed. This model demonstrates robust classification performance even when facing low signal-to-noise ratios and environmental mismatches. Meanwhile, to address the reduced generalization ability caused by limited labeled acoustic data, an improved ResNet model based on unsupervised domain adaptation ("proposed UDA-ResNet") is further constructed. This model incorporates data on simulated elevation structures of the sound field to augment the training process. Adversarial training is employed to extract domain-invariant features from simulated and trial data. These strategies help reduce the negative impact caused by domain differences. Experimental results demonstrate that the proposed method shows strong surface/underwater target classification ability under limited sample sizes, thus confirming its feasibility and effectiveness.
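The abstract does not specify the adversarial mechanism. A common way to implement adversarial extraction of domain-invariant features is a gradient reversal layer (DANN-style), sketched below in PyTorch as an assumption-laden stand-in: the tiny fully connected feature extractor replaces the paper's ResNet, and the two heads separate the target class from the data domain (simulated vs trial).

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward,
    so minimizing the domain loss *maximizes* domain confusion in the features."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

# Stand-in modules (the paper uses a ResNet on sound-field elevation structures).
feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
label_head = nn.Linear(32, 2)       # surface vs underwater
domain_head = nn.Linear(32, 2)      # simulated vs trial data

def forward(x, lamb=1.0):
    feat = feature_extractor(x)
    y_logits = label_head(feat)                             # supervised (labeled) loss
    d_logits = domain_head(GradReverse.apply(feat, lamb))   # adversarial domain loss
    return y_logits, d_logits
```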
Reliability assessment of the braking system in a high-speed train under small sample size and zero-failure data is very important for safe operation. Traditional reliability assessment methods perform well only under conditions of large sample size and complete failure data, which leads to large deviations under conditions of small sample size and zero-failure data. To improve on this, a new Bayesian method is proposed. Based on the characteristics of the solenoid valve in the braking system of a high-speed train, the modified Weibull distribution is selected to describe the failure rate over the entire lifetime. Based on the assumption of a binomial distribution for the failure probability at the censored time, a concave method is employed to obtain the relationships between accumulated failure probabilities. A numerical simulation is performed to compare the results of the proposed method with those obtained from maximum likelihood estimation, and to illustrate that the proposed Bayesian model exhibits better accuracy for the expectation value when the sample size is less than 12. Finally, the robustness of the model is demonstrated by obtaining the reliability indicators for a numerical case involving the solenoid valve of the braking system, which shows that the change in the reliability and failure rate among the different hyperparameters is small. The method avoids the misleading effects of subjective information and improves the accuracy of reliability assessment under conditions of small sample size and zero-failure data.
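The core Bayesian ingredient, a binomial model for the failure probability at a censored time, can be illustrated with a conjugate Beta update under zero failures. The sketch below is far simpler than the paper's modified-Weibull model with a concavity constraint; the test size and the uniform prior are assumptions for illustration only.

```python
from scipy import stats

# n solenoid valves tested to a censoring time with zero observed failures
# (illustrative numbers, not the paper's data).
n, failures = 12, 0
a0, b0 = 1.0, 1.0                      # Beta(1, 1) = uniform prior on failure prob. p

post = stats.beta(a0 + failures, b0 + n - failures)   # conjugate posterior Beta(1, 13)
print(f"posterior mean failure prob.: {post.mean():.4f}")
print(f"95% upper credible bound:     {post.ppf(0.95):.4f}")
print(f"reliability point estimate:   {1 - post.mean():.4f}")
```

Even with zero failures the posterior yields a usable point estimate and credible bound, which is exactly the regime where a maximum likelihood estimate degenerates to zero failure probability.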
Knowledge of the spatial distribution and sampling size optimization of soil copper (Cu) could lay a solid foundation for environmental quality surveys of agricultural soils at the county scale. In this investigation, the cokriging method was used to interpolate the Cu concentration in cropland soil in Shuangliu County, Sichuan Province, China. From the original 623 physicochemically measured soil samples, subsets of 560, 498, and 432 samples were randomly selected as the target variable, with the soil organic matter (SOM) of the whole original sample set as the auxiliary variable. Interpolation results using cokriging under the different sampling numbers were evaluated for their applicability in estimating the spatial distribution of soil Cu at the county scale. The results showed that the root mean square error (RMSE) produced by cokriging decreased by 0.9% to 7.77%, and the correlation coefficient between the predicted and measured values increased by 1.76% to 9.76%, in comparison with ordinary kriging under the corresponding sample sizes. The prediction accuracy using cokriging remained higher than that of ordinary kriging on the original 623 samples even when the sample size was reduced by 10%, and their interpolation maps were in high agreement. Therefore, cokriging proved to be a more accurate and economical method which could provide more information and benefit for studies on the spatial distribution of soil pollutants at the county scale.
The development of a core collection could enhance the utilization of germplasm collections in crop improvement programs and simplify their management. Selection of an appropriate sampling strategy is an important prerequisite for constructing a core collection of appropriate size that adequately represents the genetic spectrum and maximally captures the genetic diversity of available crop collections. The present study was initiated to construct nested core collections to determine the appropriate sample size to represent the genetic diversity of a rice landrace collection, based on 15 quantitative traits and 34 qualitative traits of 2 262 rice accessions. The results showed that nested core collections of 50-225 accessions, corresponding to sampling rates of 2.2%-9.9%, were sufficient to maintain the maximum genetic diversity of the initial collection. Of these, 150 accessions (6.6%) could capture the maximal genetic diversity of the initial collection. Three data types, i.e., qualitative traits (QT1), quantitative traits (QT2), and integrated qualitative and quantitative traits (QTT), were compared for their efficiency in constructing core collections based on the weighted pair-group average method combined with stepwise clustering and preferred sampling on adjusted Euclidean distances. Each combining scheme was used to construct eight rice core collections (225, 200, 175, 150, 125, 100, 75 and 50 accessions). The results showed that the QTT data were the best for constructing a core collection, as indicated by the genetic diversity of the resulting core collections. A core collection constructed only on the information of QT1 could not represent the initial collection effectively; QTT should therefore be used to construct a productive core collection.
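A hedged sketch of the clustering-based construction: standardize the traits, build a WPGMA ("weighted pair-group average") tree on Euclidean distances, cut it into as many clusters as the desired core size, and keep one accession per cluster. The trait matrix below is random stand-in data, and taking the accession nearest each cluster centroid is only one plausible reading of "preferred sampling".

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
traits = rng.normal(size=(2262, 15))             # stand-in quantitative trait matrix
z = (traits - traits.mean(0)) / traits.std(0)    # adjust scales before Euclidean distance

link = linkage(pdist(z, metric="euclidean"), method="weighted")   # WPGMA clustering
core_size = 150
labels = fcluster(link, t=core_size, criterion="maxclust")

# "Preferred sampling" stand-in: the accession closest to each cluster centroid.
core = [int(np.argmin(((z - z[labels == c].mean(0)) ** 2).sum(1)
                      + np.where(labels == c, 0.0, np.inf)))
        for c in range(1, core_size + 1)]
print(len(set(core)), "accessions selected for the core collection")
```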
To investigate the effect of sample size on the dynamic torsional behaviour of the 2A12 aluminium alloy, torsional split Hopkinson bar tests are conducted on this alloy with different sample dimensions. It is found that the tested yield strength increases with decreasing gauge length and thickness. However, the sample inner/outer diameter has little effect on the dynamic torsional behaviour. Based on the finite element method, the stress states in samples of different sizes are analysed. Due to the effect of the stress concentration zone (SCZ), the shorter sample has a higher yield stress. Furthermore, the stress is distributed more uniformly in the thinner sample, which leads to the higher tested yield stress. Based on the experimental and simulation analysis, some suggestions on choosing the sample size are given as well.
In the October 2014 issue of JAMA, Dr. Hinman and colleagues published the study "Acupuncture for Chronic Knee Pain: A Randomized Clinical Trial," in which the authors concluded that "in patients older than 50 years with moderate or severe chronic knee pain, neither laser nor needle acupuncture conferred benefit over sham for pain or function. Our findings do not support acupuncture"[1].
This study used an Ecopath model of Jiaozhou Bay as an example to evaluate the effect of the stomach sample size of three fish species on the projections of this model. The derived ecosystem indices were classified into three categories: (1) direct indices, like the trophic level of a species, influenced by stomach sample size directly; (2) indirect indices, like the ecotrophic efficiency (EE) of invertebrates, influenced by the multiple prey-predator relationships; and (3) systemic indices, like total system throughput (TST), describing the status of the whole ecosystem. The influences of different stomach sample sizes on these indices were evaluated. The results suggest that the systemic indices of the ecosystem model were robust to stomach sample size, whereas the species-specific indices showed low accuracy and precision when stomach samples were insufficient. The indices became more uncertain when the stomach sample sizes varied for more species. This study enhances the understanding of how the quality of diet composition data influences ecosystem modeling outputs. The results can also guide the design of stomach content analysis for developing ecosystem models.
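The "direct" effect of stomach data on trophic level follows from Ecopath's definition TL_i = 1 + Σ_j DC_ij · TL_j, where DC is the diet composition matrix estimated from stomach contents. The toy example below solves this linear system for a four-group food web (all diet fractions invented); perturbing a predator's DC row, as sparse stomach sampling would, shifts its TL directly.

```python
import numpy as np

# Diet composition matrix DC[i, j] = fraction of prey j in predator i's diet,
# estimated from stomach contents (toy values for a 4-group system).
DC = np.array([
    [0.0, 0.0, 0.0, 0.0],   # detritus eats nothing -> TL 1
    [1.0, 0.0, 0.0, 0.0],   # zooplankton eats detritus
    [0.2, 0.8, 0.0, 0.0],   # small fish
    [0.0, 0.1, 0.9, 0.0],   # large fish
])

# TL = 1 + DC @ TL  =>  (I - DC) TL = 1
TL = np.linalg.solve(np.eye(4) - DC, np.ones(4))
print(TL)   # detritus 1.0, zooplankton 2.0, small fish 2.8, large fish 3.72
```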
The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the correction fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is considerably high for conducting many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on the Kullback-Leibler divergence, which to some extent avoids the real data being "submerged" by the simulation data and achieves a fused Bayesian estimate of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
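The first step of the proposed method, expanding scarce impact-point data with the Bootstrap, can be sketched in a few lines. The resampling below estimates the dispersion center and its uncertainty from a small simulated shooting test (coordinates invented); the compatibility test and the KL-weighted fusion with simulation priors from the paper are beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Impact-point coordinates from a small shooting test (illustrative, metres;
# large longitudinal scatter, smaller lateral scatter).
pts = rng.normal(loc=[12.0, -5.0], scale=[30.0, 8.0], size=(8, 2))

B = 2000
centers = np.array([pts[rng.integers(0, len(pts), len(pts))].mean(axis=0)
                    for _ in range(B)])        # bootstrap means of resampled tests
print("dispersion-center estimate:", centers.mean(axis=0))
print("bootstrap standard error:  ", centers.std(axis=0, ddof=1))
```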
Sample size re-estimation is essential in oncology studies. However, the use of blinded sample size reassessment for survival data has rarely been reported. Based on the density function of the exponential distribution, an expectation-maximization (EM) algorithm for the hazard ratio was derived, and several simulation studies were used to verify its applications. The method showed obvious variation in the hazard ratio estimates and overestimated relatively small hazard ratios. Our studies showed that the stability of the EM estimation results correlated directly with the sample size, that the convergence of the EM algorithm was affected by the initial values, and that a balanced design produced the best estimates. No reliable blinded sample size re-estimation inference could be made in our studies, but the results provide useful information to steer practitioners in this field away from repeating the same endeavor.
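A minimal sketch of the blinded-estimation idea, assuming uncensored exponential event times pooled over two balanced arms: an EM algorithm for a two-component exponential mixture recovers the two hazards, and hence the hazard ratio, without unblinding. Sample sizes, hazards, and starting values are illustrative; with small samples or poor initial values the estimates wander, consistent with the instability the study reports.

```python
import numpy as np

rng = np.random.default_rng(4)

# Blinded pooled event times from two arms with hazards lam and lam*hr (truth below).
n, lam, hr = 300, 0.10, 2.0
t = np.concatenate([rng.exponential(1 / lam, n), rng.exponential(1 / (lam * hr), n)])

pi = 0.5                        # balanced design: mixing weight held fixed
l1, l2 = 0.05, 0.5              # initial hazards (EM is sensitive to these)
for _ in range(500):            # EM iterations
    f1 = l1 * np.exp(-l1 * t)   # exponential densities under each arm
    f2 = l2 * np.exp(-l2 * t)
    w = pi * f1 / (pi * f1 + (1 - pi) * f2)    # E-step: P(arm 1 | t_i)
    l1 = w.sum() / (w * t).sum()               # M-step: weighted exponential MLEs
    l2 = (1 - w).sum() / ((1 - w) * t).sum()

print(f"estimated hazard ratio: {max(l1, l2) / min(l1, l2):.2f} (truth {hr})")
```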
BACKGROUND: About 25% of randomised controlled trials (RCTs) on interventions for inflammatory bowel disease (IBD) have no power calculation. AIM: To systematically review RCTs reporting interventions for the management of IBD and to produce data on the minimum sample sizes that would achieve appropriate power using the actual clinical data. METHODS: We included RCTs, retrieved from the Cochrane IBD specialised trial register and CENTRAL, investigating any form of therapy for either induction or maintenance of remission against control, placebo, or no intervention in IBD patients of any age. The relevant data were extracted, and the studies were grouped according to the intervention used. We recalculated the sample size and the achieved difference, as well as the minimum sample sizes needed in the future. RESULTS: A total of 105 trials were included. There was a large discrepancy between the estimates of the minimal clinically important difference used for the calculations (15% group differences observed vs 30% used for calculation), explaining substantial actual sample size deficits. The minimum sample sizes indicated for future trials, based on the 25 years of trial data, were calculated and grouped by intervention. CONCLUSION: A third of intervention studies in IBD within the last 25 years are underpowered, with large variations in the calculation of sample sizes. The authors present a sample size estimation resource, constructed on the published evidence base, for future researchers and key stakeholders within the IBD trial field.
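The leverage of the assumed minimal clinically important difference on sample size is easy to make concrete. The sketch below applies the standard normal-approximation formula for comparing two proportions; the remission rates are invented, and they show how shrinking the assumed group difference from 30% to 15% roughly quadruples the required sample per arm.

```python
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Classic two-proportion sample size (normal approximation, two-sided test)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * (2 * pbar * (1 - pbar)) ** 0.5
           + zb * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Illustrative remission rates: control 30%, treatment 60% vs 45%.
print(round(n_per_arm(0.30, 0.60)))   # 30% difference -> ~42 per arm
print(round(n_per_arm(0.30, 0.45)))   # 15% difference -> ~162 per arm
```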
Accurate prediction of the internal corrosion rates of oil and gas pipelines could be an effective way to prevent pipeline leaks. In this study, a framework for predicting corrosion rates from a small sample of laboratory metal corrosion data was developed, providing a new perspective on how to solve the problem of pipeline corrosion under the condition of insufficient real samples. The approach employs the bagging algorithm to construct a strong learner by integrating several KNN learners. A total of 99 data points were collected and split into training and test sets at a 9:1 ratio. The training set was used to obtain the best hyperparameters by 10-fold cross-validation and grid search, and the test set was used to determine the performance of the model. The results showed that the Mean Absolute Error (MAE) of this framework is 28.06% of that of the traditional model, and that it outperforms other ensemble methods. Therefore, the proposed framework is suitable for metal corrosion prediction under small sample conditions.
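A hedged reconstruction of the described pipeline with scikit-learn: a bagging ensemble of KNN regressors tuned by 10-fold cross-validated grid search on a 9:1 split. The synthetic regression data stand in for the 99 laboratory corrosion records, and the hyperparameter grid is an assumption.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stand-in for the 99 laboratory corrosion records (features -> rate).
X, y = make_regression(n_samples=99, n_features=6, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)  # 9:1

# Bagging of KNN learners; the `estimator=` argument needs scikit-learn >= 1.2.
model = BaggingRegressor(estimator=KNeighborsRegressor(), random_state=0)
grid = {"n_estimators": [10, 30, 50], "estimator__n_neighbors": [2, 3, 5]}
search = GridSearchCV(model, grid, cv=10, scoring="neg_mean_absolute_error")
search.fit(X_tr, y_tr)

print("best hyperparameters:", search.best_params_)
print("test MAE:", mean_absolute_error(y_te, search.predict(X_te)))
```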
After completing 102 replicate constant-amplitude crack initiation and growth tests on Ly12-CZ aluminum alloy plate, a statistical investigation of the fatigue crack initiation and growth process is conducted in this paper. From post-mortem fractographic examination by scanning electron microscopy (SEM), some qualitative observations of the spatial correlation among fatigue striations are developed to reveal the statistical nature of the material's intrinsic inhomogeneity during the crack growth process. From the test data, an engineering division between crack initiation and growth is defined as the upper limit of the small crack. The distributions of the crack initiation life N_i and growth life N, and the statistical characteristics of the crack growth rate da/dN, are also investigated. It is hoped that the work will provide a solid experimental basis for the study of probabilistic fatigue, probabilistic fracture mechanics, fatigue reliability and their engineering applications.
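As an illustration of the kind of distributional analysis described for the crack initiation life, the sketch below fits a two-parameter lognormal (a common choice for fatigue lives, assumed here rather than taken from the paper) to simulated N_i data from 102 replicate tests and reads off a low quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in initiation lives for 102 replicate specimens (cycles; values invented).
Ni = rng.lognormal(mean=np.log(8.0e4), sigma=0.3, size=102)

shape, loc, scale = stats.lognorm.fit(Ni, floc=0)    # 2-parameter lognormal fit
p01 = stats.lognorm.ppf(0.01, shape, loc, scale)     # 1% quantile ("safe life")
print(f"median life {scale:.3g} cycles, 1% life {p01:.3g} cycles")
```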
Sample size can be a key design feature that not only affects the probability of a trial's success but also determines the duration and feasibility of the trial. If an investigational drug is expected to be effective and to address unmet medical needs of an orphan disease, where the accrual period may require many years with a large sample size to detect a minimal clinically relevant treatment effect, a minimum sample size may be set to maintain nominal power. In limited situations such as this, there may be a need for flexibility in the initial and final sample sizes; thus, it is useful to consider the utility of adaptive sample size designs that use sample size re-estimation or a group sequential design. In this paper, we propose a new adaptive performance measure to assess the utility of an adaptive sample size design in a trial simulation. Considering that previously proposed sample size re-estimation methods do not take into account errors in estimation based on interim results, we propose Bayesian sample size re-estimation criteria that take into account prior information on the treatment effect, and we assess their operating characteristics in a simulation study. We also present a review example of sample size re-estimation, based mainly on a published paper and a review report from the Pharmaceuticals and Medical Devices Agency (PMDA).
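One way to "take prior information on treatment effect into account" when judging a sample size is assurance: power averaged over a prior on the effect. The Monte Carlo sketch below computes it for a two-arm trial with a normal outcome; the prior, outcome standard deviation, and sample sizes are illustrative, and this is not the paper's specific re-estimation criterion.

```python
import numpy as np
from scipy.stats import norm

def assurance(n_per_arm, sigma=1.0, prior_mu=0.4, prior_sd=0.2,
              alpha=0.05, draws=100_000, seed=6):
    """Prior-averaged power ('assurance') for a two-arm normal-outcome trial."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mu, prior_sd, draws)   # prior draws of the true effect
    se = sigma * np.sqrt(2 / n_per_arm)             # SE of the mean difference
    power = norm.sf(norm.ppf(1 - alpha / 2) - delta / se)
    return power.mean()

for n in (50, 100, 200):
    print(n, round(assurance(n), 3))
```

Unlike conditional power at a single assumed effect, assurance flattens out well below 1 as n grows whenever the prior puts mass near zero effect, which is the estimation-error issue the proposed criteria address.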
Road surface condition evaluation involves the collection of data on different types of distresses over the pavement surface. The exercise consumes a lot of resources if the whole road section length is surveyed, and it may be prone to errors resulting from surveyors' fatigue. It is therefore important to develop a representative sample to be used when evaluating road condition manually. This study aimed at determining an adequate sample size for section-level condition evaluation, as well as a way forward for network-level condition evaluation, of highways in Nepal. The study also quantified the effects of altering the sample unit size when performing a distress survey according to the PCI (pavement condition index) and SDI (surface distress index) methods separately for asphalt-surfaced roads. The effect of reducing or increasing the sample unit size was investigated by visual examination through a field survey conducted by eight teams in July 2015 along a section of the Banepa-Bardibas highway. The PCI was then calculated for each sample unit using standard deduct curves and the PCI calculation methodology per SHRP (Strategic Highway Research Program) recommendations, and the SDI was computed per DoR (Department of Roads) guidelines. The results show that 13% of sample units are needed for SDI computation and 21% for PCI computation; however, these results fall outside the significance level. Both rates are higher than the DoR and SHRP guidelines. Moreover, no strong relationship is observed between the SDI and PCI values.
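For context, the SHRP/ASTM D6433 procedure itself prescribes the minimum number of PCI sample units for 95% confidence via n = N·s² / ((e²/4)(N−1) + s²), with s the assumed PCI standard deviation (about 10 for asphalt) and e the acceptable error (commonly 5). The snippet below evaluates it for a few section sizes, which makes the comparison with the 13%/21% field result concrete.

```python
import math

def pci_sample_units(N, s=10.0, e=5.0):
    """Minimum sample units to survey (ASTM D6433): 95% confidence,
    acceptable PCI error e, assumed PCI standard deviation s."""
    return math.ceil(N * s**2 / ((e**2 / 4) * (N - 1) + s**2))

for N in (20, 50, 100, 400):            # total sample units in the road section
    n = pci_sample_units(N)
    print(f"N={N:4d}: survey {n:2d} units ({n / N:.0%})")
```

The guideline percentage falls quickly with section length (from 50% of units at N = 20 to about 4% at N = 400), so a flat 21% field requirement can indeed exceed the guideline for long sections.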
Sample size determination typically relies on a power analysis based on a frequentist conditional approach. The latter can be seen as a particular case of the two-priors approach, which allows one to build four distinct power functions to select the optimal sample size. We revise this approach for the case where the focus is on testing a single binomial proportion. We consider exact methods and introduce a conservative criterion to account for the typical non-monotonic behavior of the power functions when dealing with discrete data. The main purpose of this paper is to present a Shiny App providing a user-friendly, interactive tool to apply these criteria. The app also provides specific tools to elicit the analysis and design prior distributions, which are the core of the two-priors approach.
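The non-monotonic behavior that the conservative criterion guards against is easy to reproduce: with discrete data, exact power does not increase steadily with n because the attainable critical value jumps. The sketch below computes exact power for a one-sided exact binomial test at assumed proportions p0 = 0.3 and p1 = 0.5 (values chosen for illustration, not from the paper).

```python
from scipy.stats import binom

def exact_power(n, p0=0.3, p1=0.5, alpha=0.05):
    """Exact power of the one-sided binomial test of H0: p <= p0 at H1: p = p1."""
    # Smallest critical value c with P(X >= c | p0) <= alpha;
    # binom.sf(k - 1, n, p) equals P(X >= k).
    c = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)
    return binom.sf(c - 1, n, p1)

for n in range(40, 61):
    print(n, round(exact_power(n), 3))   # power saw-tooths rather than rising monotonically
```

A conservative criterion then picks the smallest n such that power stays above the target for all larger sample sizes, not merely at n itself.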
Species distribution patterns are one of the important topics in ecology and biological conservation. Although species distribution models have been used intensively in this research, the effects of spatial associations and spatial dependence have rarely been taken into account in the modeling process. Recently, Joint Species Distribution Models (JSDMs) have offered the opportunity to consider environmental factors and interspecific relationships as well as the role of spatial structure. This study uses the HMSC (Hierarchical Modelling of Species Communities) framework to model the multispecies distribution of a marine fish assemblage, in which spatial associations and spatial dependence are deliberately accounted for. Three HMSC models were implemented with different random-effect structures to address the existence of spatial associations and spatial dependence, and their predictive performances at different sample sizes were analyzed in the assessment. The results showed that the models with random effects could account for a larger proportion of the explainable variance (32.8%), and the spatial random-effect model in particular provided the best predictive performance (mean R^2 = 0.31), indicating that spatial random effects could substantially influence the results of the joint species distribution. Increasing the sample size had a stronger effect (mean R^2 = 0.24-0.31) on the predictive accuracy of the spatially structured model than on the other models, suggesting that optimal model selection should depend on sample size. This study highlights the importance of incorporating spatial random effects in JSDM predictions and suggests that the choice of model structure should consider the data quality across species.
The sap flow method is widely used to estimate forest transpiration. However, at the individual tree level it shows spatiotemporal variations due to the impacts of environmental conditions and spatial relationships among trees. Therefore, an in-depth understanding of the coupling effects of these factors is important for designing sap flow measurement schemes and performing accurate assessments of stand-scale transpiration. This study is based on observations of the sap flux density (SF_d) of nine sample trees with different Hegyi's competition indices (HCIs), soil moisture, and meteorological conditions in a pure plantation of Larix gmelinii var. principis-rupprechtii during the 2021 growing season (May to September). A multifactorial model of sap flow was developed, and possible errors in the stand-scale sap flow estimates associated with sample size were determined using model-based predictions of sap flow. Temporal variations are controlled by vapour pressure deficit (VPD), solar radiation (R), and soil moisture, and these relationships can be described by polynomial or saturated exponential functions. Spatial (individual) differences were influenced by the HCI, as shown by a decaying power function. A simple SF_d model at the individual tree level was developed to describe the synergistic influences of VPD, R, soil moisture, and HCI. The coefficient of variation (CV) of the sap flow estimates gradually stabilized when the sample size was >10; at least six sample trees were needed to keep the CV within 10%. This study improves understanding of the mechanisms of spatiotemporal variations in sap flow at the individual tree level and provides a new methodology for determining the optimal sample size for sap flow measurements.
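The CV-versus-sample-size analysis can be emulated by resampling: draw subsets of n trees from a tree-level population, form the stand estimate from each subset, and track the spread of that estimate. In the sketch below, a lognormal spread of sap flux densities across 30 trees is an assumption standing in for the model-predicted tree population used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in tree-level sap flux densities for a 30-tree stand (values invented).
sf = rng.lognormal(mean=0.0, sigma=0.35, size=30)

for n in (3, 6, 10, 15):
    # Monte Carlo: stand mean estimated from n randomly chosen sample trees.
    est = [rng.choice(sf, n, replace=False).mean() for _ in range(5000)]
    cv = np.std(est, ddof=1) / np.mean(est)
    print(f"n={n:2d}: CV of stand estimate = {cv:.1%}")
```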
Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The approach disaggregation and harmonization of soil map units through resampled classification trees (DSMART) is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the disaggregation approach pure polygon disaggregation (PPD) assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling methods were used: two sampling designs, simple random sampling (SRS) and conditional Latin hypercube sampling (cLHS), with two sample sizes (83 100 and 19 420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), the C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare disaggregation accuracy. The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report. C5.0 (Kappa index = 0.58-0.63) performed better than RF (Kappa index = 0.53-0.54) with the larger sample size, and PPD with C5.0 at the larger sample size was the best-performing approach (Kappa index = 0.63). With the smaller sample size, cLHS (Kappa index = 0.41-0.48) and SRS (Kappa index = 0.40-0.47) produced similar accuracy. PPD exhibited lower processing-capacity and time demands (1.62-5.93 h) while yielding maps with lower uncertainty compared with DSMART (2.75-194.2 h). For CSMs predominantly composed of pure polygons, utilizing PPD for soil series disaggregation is the more efficient and rational choice. However, DSMART is the preferable approach for disaggregating soil series that lack pure polygon representations in the CSMs.
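The "area-weighted approach" to allocating sample counts across map polygons can be sketched with a largest-remainder apportionment, shown below for the smaller sampling plan. The polygon areas are invented, and this is one plausible reading of the design rather than the authors' exact procedure.

```python
import numpy as np

# Toy map: polygon areas in hectares (illustrative values only).
areas = np.array([1200.0, 450.0, 3300.0, 800.0])
total_samples = 19_420

# Area-weighted allocation: samples per polygon proportional to its area...
exact = total_samples * areas / areas.sum()
alloc = np.floor(exact).astype(int)
# ...then hand out the remaining samples to the largest fractional parts.
alloc[np.argsort(-(exact - alloc))[: total_samples - alloc.sum()]] += 1
print(alloc, alloc.sum())   # per-polygon counts summing exactly to the plan size
```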