Abstract: Various random models with balanced data that are relevant for analyzing practical test data are described, along with several hypothesis testing and interval estimation problems concerning variance components. In this paper, we mainly consider these problems in a general random effects model with balanced data. Exact tests and confidence intervals for a single variance component corresponding to a random effect are developed using generalized p-values and generalized confidence intervals. The resulting procedures are easy to compute and are applicable to small samples. Exact tests and confidence intervals are also established for comparing the random-effects variance components, and the sums of random-effects variance components, in two independent general random effects models with balanced data. Furthermore, we investigate the statistical properties of the resulting tests. Finally, some simulation results on the Type I error probability and power of the proposed tests are reported. The simulation results indicate that the exact test is extremely satisfactory in controlling the Type I error probability.
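As a concrete illustration of the generalized-inference machinery this abstract describes, the sketch below builds a generalized confidence interval for the between-group variance component in a balanced one-way random effects model by Monte Carlo sampling of a standard generalized pivotal quantity. The one-way setting, function name, and sample sizes are illustrative assumptions; the paper's procedures cover more general balanced random models and are not reproduced here.

```python
import numpy as np

def gci_between_variance(y, level=0.95, B=20000, seed=0):
    """Generalized confidence interval for the between-group variance sigma_a^2
    in a balanced one-way random effects model y[i, j] = mu + a_i + e_ij."""
    rng = np.random.default_rng(seed)
    a, n = y.shape                                    # a groups, n observations per group
    group_means = y.mean(axis=1)
    ssa = n * np.sum((group_means - y.mean()) ** 2)   # between-group sum of squares
    sse = np.sum((y - group_means[:, None]) ** 2)     # within-group sum of squares
    # Generalized pivotal quantity: substitute independent chi-square draws for the
    # distributions of SSA/(n*sigma_a^2 + sigma_e^2) and SSE/sigma_e^2.
    chi_a = rng.chisquare(a - 1, size=B)
    chi_e = rng.chisquare(a * (n - 1), size=B)
    R = (ssa / chi_a - sse / chi_e) / n
    # In practice negative endpoints are often truncated at zero.
    lo, hi = np.quantile(R, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Example: a = 6 groups, n = 5 replicates each (synthetic data).
rng = np.random.default_rng(1)
y = 2.0 + rng.normal(0, 1.0, size=(6, 1)) + rng.normal(0, 0.5, size=(6, 5))
print(gci_between_variance(y))
```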
Funding: This research is supported by the National Natural Science Foundation of China under Grant No. 10771015 and the Start-Up Funds for Doctoral Scientific Research of Shandong University of Finance.
Abstract: The one-sided and two-sided hypotheses about the parametric component of a partially linear model are considered in this paper. Generalized p-values based on the fiducial method are proposed for testing the two hypotheses in the presence of a nonparametric nuisance parameter. Since the nonparametric component can be approximated by a linear combination of known functions, the partially linear model can be approximated by a linear model. Therefore, generalized p-values for a linear model are studied first, and the results are then extended to the partially linear model. Small-sample frequency properties are analyzed theoretically, and simulations are conducted to assess the finite-sample performance of the tests based on the proposed p-values.
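To make the linearization step concrete, the sketch below approximates the nonparametric component by a small polynomial basis and then tests the parametric coefficient in the resulting linear model with an ordinary t-test. The basis choice, its dimension, and the function name are assumptions made for illustration; the paper's fiducial generalized p-values refine the plain t-test shown here.

```python
import numpy as np
from scipy import stats

def t_test_parametric_part(y, x, t, df_basis=6):
    """Approximate g(t) with a polynomial basis and test H0: beta = 0 in
    y = x * beta + g(t) + error, via ordinary least squares on the augmented design."""
    B = np.vander(t, df_basis, increasing=True)        # crude basis for the nonparametric part
    Z = np.column_stack([x, B])
    coef, _, _, _ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    dof = len(y) - Z.shape[1]
    sigma2 = resid @ resid / dof                       # residual variance estimate
    se_beta = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[0, 0])
    t_stat = coef[0] / se_beta
    return t_stat, 2 * stats.t.sf(abs(t_stat), dof)    # two-sided p-value
```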
Abstract: We present here an alternative definition of the P-value for a statistical hypothesis test of a real-valued parameter of a continuous random variable X. Our approach uses neither the notion of Type I error nor the assumption that the null hypothesis is true. Instead, the new P-value involves the maximum likelihood estimator, which is usually available for a parameter such as the mean μ or the standard deviation σ of a random variable X with a common distribution.
Abstract: Nirmal et al. presented a machine learning-based design of ternary organic solar cells that utilizes feature importance [1]. This paper highlights alarming potential biases in the use of feature importance in machine learning, which can lead to incorrect conclusions and outcomes. Many scientists and researchers, including Nirmal et al., are unaware that feature importances in machine learning are, in general, model-specific and do not necessarily represent true associations between the target and the features.
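The sketch below illustrates the kind of divergence the comment warns about: a model-specific (impurity-based) importance versus a permutation importance computed on held-out data, using synthetic data and a random forest. The dataset, the duplicated feature, and the estimator are illustrative assumptions and have nothing to do with the solar-cell data of Nirmal et al.

```python
import numpy as np
from sklearn.datasets import make_regression            # synthetic stand-in data
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, n_informative=3,
                       noise=5.0, shuffle=False, random_state=0)
# Append a nearly identical copy of the first (informative) feature; correlated
# features are a common way importances get split and misread.
X = np.column_stack([X, X[:, 0] + np.random.default_rng(0).normal(0, 0.01, len(X))])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("impurity-based (model-specific):", np.round(model.feature_importances_, 3))
perm = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
print("permutation (held-out data):   ", np.round(perm.importances_mean, 3))
```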
Abstract: The p value has been widely used as a way to summarise significance in data analysis. However, misuse and misinterpretation of the p value are common in practice. Our result shows that if the model specification is wrong, the distribution of the p value may be inappropriate, which makes decisions based on the p value invalid.
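A small simulation of the kind of distortion the abstract warns about: data with AR(1)-correlated errors analyzed with an i.i.d. one-sample t-test. The autocorrelation level and sample size are assumptions chosen only to make the effect visible; under a correctly specified null the p-value would be uniform and the rejection rate close to 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pvals = []
for _ in range(5000):
    # True model: AR(1)-correlated errors with mean 0; analysis model: i.i.d. normal.
    e = rng.normal(size=50)
    x = np.empty(50)
    x[0] = e[0]
    for t in range(1, 50):
        x[t] = 0.6 * x[t - 1] + e[t]          # ignored correlation = misspecification
    pvals.append(stats.ttest_1samp(x, 0.0).pvalue)

pvals = np.array(pvals)
print("empirical rejection rate at 0.05:", np.mean(pvals < 0.05))
```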
Funding: Supported by the China National Key Research and Development Program (2022YFF0800601) and the Special Fund of the Institute of Geophysics, China Earthquake Administration (DQJB23Z09).
Abstract: For earthquakes (M ≥ 4.0) occurring along and around the East Anatolian fault zone and the Dead Sea fault zone during the ten years immediately before the MW7.8 Gaziantep earthquake, Türkiye, of February 6, 2023, we explored the correlation between seismicity and the Earth's rotation. We statistically evaluated the correlation using Schuster's test, and the results are quantified by a p-value. We found a clear downward trend in the p-values from early 2020 to late 2022 in the studied region. We also obtained a spatial distribution of the p-values showing a low p-value area near the northeastern end of the aftershock zone. Although the stress induced by the rotation of the Earth is very weak, it could control earthquake occurrence when the focal medium is loaded to the critical state needed to release a large earthquake. A decrease in the b-value of the Gutenberg-Richter (G-R) relation is interpreted as an increase in tectonic stress in the crust. We investigated the b-value as a function of time in the study region and found that the b-value had decreased for about eleven years before the p-value started to decrease, with a relative reduction of 57%. Therefore, the low p-values obtained in the present study suggest that earthquake occurrence was dominated by the Earth's rotation prior to the MW7.8 Türkiye earthquake, owing to the critical state of the focal region.
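Schuster's test, as used here, reduces to a simple formula on the phase angles of the events within the periodic loading cycle. A minimal sketch follows; the event times, the forcing period, and the phase-extraction step are illustrative placeholders, not values from the study.

```python
import numpy as np

def schuster_p_value(phases_rad):
    """Schuster's test for non-uniform event phases (radians): small p-values
    indicate that events cluster at a preferred phase of the periodic loading."""
    n = len(phases_rad)
    d2 = np.sum(np.cos(phases_rad)) ** 2 + np.sum(np.sin(phases_rad)) ** 2
    return np.exp(-d2 / n)

# Example with hypothetical event times (days) and an assumed forcing period (days).
event_times = np.array([0.2, 1.1, 2.3, 2.9, 4.0, 5.2, 6.1, 7.4])
period = 1.0
phases = 2 * np.pi * np.mod(event_times, period) / period
print(schuster_p_value(phases))
```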
Funding: Supported by the CAS Project for Young Scientists in Basic Research (Grant No. YSBR-034) and the National Natural Science Foundation of China (Grant Nos. 12325110 and 12201432).
Abstract: Combining p-values is a well-known problem in statistical inference. When faced with a study involving m p-values, determining how to combine them effectively to arrive at a comprehensive and reliable conclusion is a significant concern in various fields, including genetics, genomics, and economics, among others. The literature offers a range of combination strategies tailored to different research objectives and data characteristics. In this work, we aim to provide users with a systematic exploration of the p-value combination problem. We present theoretical results for combining p-values using a logarithmic transformation, which highlights the benefits of this approach. Additionally, we propose a combination strategy, together with its statistical properties, based on the golden section method, and showcase its performance through extensive computer simulations. To further illustrate its effectiveness, we apply the approach to a real-world scenario.
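The logarithmic-transformation combiner referred to here is, in its classical form, Fisher's method; a minimal sketch is given below. The paper's golden-section strategy builds on this idea but is not reproduced here.

```python
import numpy as np
from scipy import stats

def fisher_combine(pvalues):
    """Fisher's combination: T = -2 * sum(log p_i) ~ chi-square with 2m df under H0."""
    p = np.asarray(pvalues, dtype=float)
    T = -2.0 * np.sum(np.log(p))
    return T, stats.chi2.sf(T, df=2 * len(p))

print(fisher_combine([0.04, 0.20, 0.11, 0.66]))
```

The same combiner is also available as `scipy.stats.combine_pvalues(p, method='fisher')`.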
Abstract: We describe here a comprehensive framework for intelligent information management (IIM) of data-collection and decision-making actions for reliable and robust event processing and recognition. It is driven by algorithmic information theory (AIT) in general, and by algorithmic randomness and Kolmogorov complexity (KC) in particular. The processing and recognition tasks addressed include data discrimination and multilayer open-set data categorization; change detection; data aggregation, clustering, and data segmentation; data selection and link analysis; data cleaning and data revision; and prediction and identification of critical states. The unifying theme throughout the paper is that "compression entails comprehension", which is realized using the interrelated concepts of randomness vs. regularity and Kolmogorov complexity. The constructive and all-encompassing active learning (AL) methodology, which mediates and supports this theme, is context-driven and takes advantage of statistical learning in general, and of semi-supervised learning and transduction in particular. Active learning employs the explore and exploit actions characteristic of closed-loop control for evidence accumulation, in order to revise its prediction models and to reduce uncertainty. The set-based similarity scores, driven by algorithmic randomness and Kolmogorov complexity, employ strangeness/typicality and p-values. We propose the application of the IIM framework to the prediction of critical states of complex physical systems, in particular the prediction of cyclone genesis and intensification.
Abstract: We advance here a novel methodology for robust intelligent biometric information management, with inferences and predictions made using randomness and complexity concepts. Intelligence refers to learning, adaptation, and functionality; robustness refers to the ability to handle incomplete and/or corrupt adversarial information on one side, and image and/or device variability on the other. The proposed methodology is model-free and non-parametric. It draws support from discriminative methods using likelihood ratios to link biometrics and forensics at the conceptual level. At the modeling and implementation level, it further links the Bayesian framework, statistical learning theory (SLT) using transduction and semi-supervised learning, and information theory (IT) using mutual information. The key concepts supporting the proposed methodology are (a) local estimation to facilitate learning and prediction using both labeled and unlabeled data; (b) similarity metrics using regularity of patterns, randomness deficiency, and Kolmogorov complexity (similar to MDL), via strangeness/typicality and ranking p-values; and (c) the Cover-Hart theorem on the asymptotic performance of k-nearest neighbors approaching the optimal Bayes error. Several topics on biometric inference and prediction are described, related to (1) multi-level and multi-layer data fusion, including quality and multi-modal biometrics; (2) score normalization and revision theory; (3) face selection and tracking; and (4) identity management. They are treated using an integrated approach that includes transduction and boosting for ranking and for sequential fusion/aggregation, respectively, on one side, and active learning and change/outlier/intrusion detection realized using information gain and martingales, respectively, on the other. The proposed methodology can be mapped to additional types of information beyond biometrics.
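As an illustration of the strangeness/typicality machinery mentioned in both of the preceding abstracts, here is a minimal sketch of a k-NN strangeness score and the associated ranking p-value. The function names, the choice of k, and the distance inputs are assumptions, and the ratio shown is only one common variant of the score the abstracts allude to.

```python
import numpy as np

def knn_strangeness(dists_same_class, dists_other_class, k=3):
    """Strangeness of an example: sum of its k smallest same-class distances
    divided by the sum of its k smallest other-class distances (larger = stranger)."""
    return np.sort(dists_same_class)[:k].sum() / np.sort(dists_other_class)[:k].sum()

def ranking_p_value(candidate, reference_strangeness):
    """Transductive p-value: the rank of the candidate's strangeness among the
    reference strangeness values (including the candidate itself)."""
    ref = np.asarray(reference_strangeness)
    return (np.sum(ref >= candidate) + 1) / (len(ref) + 1)
```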
Abstract: Someone or other is always pointing to a published study to justify a point of view or the need for a change in what we do or how we live. There are many such studies, often reported in top-notch journals, whose results are inconsistent across studies and often inconsistent within them. It is in the interest of increasing the credibility of science, and of safeguarding a general public living with its overt and covert influence, to filter good science from bad. Some inferences are good, even when counter-intuitive or seemingly inconsistent, and are likely to withstand scrutiny; others may represent marginal effects in the aggregate that are not particularly useful for individual choices or decisions and are often non-reproducible. The New York Times featured an article in August 2018 debunking some of the reported studies supporting testing for Vitamin D deficiency and the recommendation of large supplemental doses of Vitamin D. Some of these Vitamin D claims, among others, were reported as not holding up on replication in controlled trials [1]. We noted in Ref. [2] that, as individuals, we need to be wary of signals detected in studies using stochastic data, even when these aggregate signals are of large magnitude. We demonstrated discordance rates of 30% or higher between subject-level assessments of effect and the conclusion drawn in the aggregate. Here we provide a computation of this discordant proportion, as well as post-hoc assessments of aggregate inferences, with emphasis on evaluating studies with time-to-event endpoints such as those in cancer trials. Similar evaluations for continuous data, binomial data, and correlations are also provided. We also discuss the use of response thresholds.
Abstract: We start with a description of the statistical inferential framework and the duality between observed data and the true state of nature that underlies it. We demonstrate that the usual testing of dueling hypotheses, with acceptance of one and rejection of the other, is a framework that can often be faulty when such inferences are applied to individual subjects. This follows from noting that the statistical inferential framework is predominantly based on conclusions drawn for aggregates, and that what is true in the aggregate frequently does not hold for individuals: an ecological fallacy. Such a fallacy is usually seen as problematic when each data record represents aggregate statistics for counties or districts rather than data for individuals; here we demonstrate strong ecological fallacies even when using subject-level data. Inverted simulations of trials, rightly sized to detect meaningful differences and yielding a statistically significant p-value of 0.000001 (1 in a million) associated with clinically meaningful differences between a hypothetical new therapy and a standard therapy, had a proportion of subjects whose standard-therapy effect was better than the new-therapy effect close to 30%. A "winner take all" choice between two hypotheses may not be supported by statistically significant differences based on stochastic data. We also argue the incorrectness, across many individuals, of other summaries such as correlations, density estimates, standard deviations, and predictions based on machine learning models. Despite these artifacts, we support the use of prospective clinical trials and careful unbiased model building as necessary first steps. In health care, high-touch personalized care based on patient-level data will remain relevant even as we adopt more high-tech, data-intensive personalized therapeutic strategies based on aggregates.
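A minimal simulation of the aggregate-versus-individual tension described above, under an assumed normal-response model. The effect size and per-arm sample size are illustrative, not the paper's inverted-simulation settings; the point is only that a small aggregate p-value can coexist with a large proportion of "discordant" subject pairs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
delta, sigma, n = 0.5, 1.0, 100              # standardized effect and per-arm size (assumed)
new = rng.normal(delta, sigma, n)            # hypothetical new-therapy responses
std = rng.normal(0.0, sigma, n)              # hypothetical standard-therapy responses

p = stats.ttest_ind(new, std).pvalue         # aggregate inference
# Subject-level discordance: how often a standard-therapy subject beats a new-therapy subject.
discordance = np.mean(std[:, None] > new[None, :])
print(f"two-sample p-value = {p:.3g}, discordant pairs = {discordance:.2f}")
print("theoretical discordance:", stats.norm.cdf(-delta / (sigma * np.sqrt(2))))
```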
Abstract: An AR(1) model with ARCH(1) errors is known as the first-order double autoregressive (DAR(1)) model. In this paper, a conditional likelihood-based method is proposed to obtain inference for the two scalar parameters of interest in the DAR(1) model. Theoretically, the proposed method has a rate of convergence of O(n^(-3/2)). Applying the proposed method to a real-life data set shows that its results can be quite different from those obtained by existing methods. Results from Monte Carlo simulation studies illustrate the excellent accuracy of the proposed method even when the sample size is small.
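For readers unfamiliar with the model, here is a minimal sketch of simulating a DAR(1) process and evaluating its Gaussian conditional log-likelihood. The parameter values and function names are illustrative, and the paper's higher-order-accurate inference procedure is not reproduced here.

```python
import numpy as np

def simulate_dar1(n, phi, omega, alpha, seed=0):
    """Simulate a DAR(1) process: y_t = phi*y_{t-1} + e_t*sqrt(omega + alpha*y_{t-1}^2)."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal() * np.sqrt(omega + alpha * y[t - 1] ** 2)
    return y

def conditional_loglik(params, y):
    """Gaussian conditional log-likelihood of a DAR(1) model, conditioning on y_1."""
    phi, omega, alpha = params
    h = omega + alpha * y[:-1] ** 2               # conditional variance
    resid = y[1:] - phi * y[:-1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + resid ** 2 / h)

y = simulate_dar1(500, phi=0.3, omega=0.5, alpha=0.4)
print(conditional_loglik((0.3, 0.5, 0.4), y))
```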
Abstract: Today, coronavirus appears as a serious challenge to the whole world. Epidemiological data on coronavirus are collected through media and web sources for the purpose of analysis. New data on COVID-19 become available daily, yet information about the biological aspects of SARS-CoV-2 and the epidemiological characteristics of COVID-19 remains limited, and uncertainty remains around nearly all of its parameter values. This research provides the scientific and public health communities with better resources, knowledge, and tools to improve their ability to control infectious diseases. Using publicly available data on the ongoing pandemic, the present study investigates the incubation period and other time intervals that govern the epidemiological dynamics of COVID-19 infections. Testing hypotheses are formulated for different countries at a 95% level of confidence, and descriptive statistics are calculated to analyze in which region COVID-19 falls according to the tested hypothesized mean for each country. The results will be helpful in decision making as well as in further mathematical analysis and control strategy. Statistical tools are used to investigate the pandemic and will be useful for further research. Hypotheses are tested for differences in various effects, including standard errors, and changes in state variables are observed over time. The rapid outbreak of coronavirus can be stopped by reducing its transmission: susceptible individuals should maintain a safe distance and follow precautionary measures regarding COVID-19 transmission.
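A minimal sketch of the kind of country-level test and 95% confidence interval the abstract describes: a one-sample t-test of an observed incubation-period sample against a hypothesized mean. The observations and the hypothesized mean below are purely illustrative placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical incubation-period observations (days) for one country; illustrative only.
incubation = np.array([4.1, 5.3, 6.0, 4.8, 7.2, 5.5, 3.9, 6.4, 5.1, 4.7])
mu0 = 5.1                                    # hypothesized mean incubation period (assumed)

t_stat, p_val = stats.ttest_1samp(incubation, mu0)
ci = stats.t.interval(0.95, len(incubation) - 1,
                      loc=incubation.mean(), scale=stats.sem(incubation))
print(f"t = {t_stat:.2f}, p = {p_val:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```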
Funding: Supported by the National Natural Science Foundation of China (Nos. 41974068 and 41574040) and the Key International S&T Cooperation Project of P.R. China (No. 2015DFA21260).
Abstract: In this study, we investigate how the stress variation generated by a fault undergoing transient postseismic slip (TPS) affects the rate of aftershocks. First, we show that the postseismic slip in the Rubin-Ampuero model is a TPS that can occur on the main fault with velocity-weakening frictional motion, that the resulting slip function is similar to the generalized Jeffreys-Lomnitz creep law, and that the TPS can be explained by a continuous creep process undergoing reloading. Second, we obtain an approximate solution, based on the Helmstetter-Shaw seismicity model, relating the rate of aftershocks to such TPS. For the Wenchuan sequence, we perform a numerical fit of the cumulative number of aftershocks using the Modified Omori Law (MOL), the Dieterich model, and the specific TPS model. The fitting curves indicate that the data are better explained by the TPS model, with a B/A ratio of approximately 1.12, where A and B are the parameters in the rate- and state-dependent friction law. Moreover, the p and c that appear in the MOL can be interpreted in terms of B/A and the critical slip distance, respectively. Because the B/A ratio in the current model is always larger than 1, the model is a possible candidate to explain why aftershock rates commonly decay as a power law with a p-value larger than 1. Finally, the influence of the background seismicity rate r on the parameters is studied; the results show that, except for the apparent aftershock duration, the parameters are insensitive to r.
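To make the Modified Omori Law fit concrete, here is a minimal sketch that fits the cumulative count implied by n(t) = K/(t+c)^p (restricted to p > 1 for simplicity) to synthetic occurrence times. The catalog, parameter values, and bounds are placeholders; the Dieterich and TPS models used in the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_omori(t, K, c, p):
    """Cumulative aftershock count implied by the Modified Omori Law n(t) = K/(t+c)^p, p != 1."""
    return K * ((t + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)

# Synthetic occurrence times (days) drawn from an Omori-type decay as a placeholder catalog.
rng = np.random.default_rng(0)
true_c, true_p, T = 0.1, 1.2, 365.0
u = rng.uniform(size=2000)
a, b = true_c ** (1 - true_p), (T + true_c) ** (1 - true_p)
t_days = np.sort((a + u * (b - a)) ** (1.0 / (1 - true_p)) - true_c)
N_obs = np.arange(1, len(t_days) + 1)

(K, c, p), _ = curve_fit(cumulative_omori, t_days, N_obs,
                         p0=[300.0, 0.2, 1.1], bounds=([0, 1e-4, 1.01], [1e6, 10, 2.5]))
print(f"MOL fit: K = {K:.1f}, c = {c:.3f} days, p = {p:.2f}")
```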