In the new era, the impact of emerging productive forces has permeated every sector of industry. As the core production factor of these forces, data plays a pivotal role in industrial transformation and social development. Consequently, many domestic universities have introduced majors or courses related to big data. Among these, the Big Data Management and Applications major stands out for its interdisciplinary approach and emphasis on practical skills. However, as an emerging field, it has not yet accumulated a robust foundation in teaching theory and practice. Current instructional practices face issues such as unclear training objectives, inconsistent teaching methods and course content, insufficient integration of practical components, and a shortage of qualified faculty; these factors hinder both the development of the major and the overall quality of education. Taking the statistics course within the Big Data Management and Applications major as an example, this paper examines the challenges faced by statistics education in the context of emerging productive forces and proposes corresponding improvement measures. By introducing innovative teaching concepts and strategies, the teaching system for professional courses is optimized, and authentic classroom scenarios are recreated through illustrative examples. Questionnaire surveys and statistical analyses of data collected before and after the teaching reforms indicate that the curriculum changes effectively enhance instructional outcomes, promote the development of the major, and improve the quality of talent cultivation.
Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results: This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within the acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts with predicted yields. These differences highlighted the differential interactions of weather factors in cotton yield formation in each district, underscoring the individual response of each weather factor under the different soil and management conditions across the major cotton-growing districts of Karnataka. Conclusions: Compared with statistical models, machine learning models such as ANNs showed higher efficiency in forecasting cotton yield owing to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies. Funding: India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project (No. ASC/FASAL/KT-11/01/HQ-2010).
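As a hedged illustration of the evaluation metrics named above, the following minimal Python/NumPy sketch computes RMSE, nRMSE, and modelling efficiency (EF, in its Nash-Sutcliffe form), plus per-observation yield deviation; the district yields and the `evaluate_yield_model` helper are hypothetical, not taken from the study.

```python
import numpy as np

def evaluate_yield_model(obs, pred):
    """Compute RMSE, nRMSE (%), modelling efficiency (EF), and % deviation."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    nrmse = 100.0 * rmse / obs.mean()                  # normalized by observed mean
    ef = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    deviation = 100.0 * (pred - obs) / obs             # per-year yield deviation (%)
    return rmse, nrmse, ef, deviation

# Hypothetical district-level yields (kg/ha): observed vs. ANN-predicted
obs  = [412, 530, 488, 605, 377]
pred = [430, 512, 470, 640, 360]
rmse, nrmse, ef, dev = evaluate_yield_model(obs, pred)
print(f"RMSE={rmse:.1f}  nRMSE={nrmse:.1f}%  EF={ef:.3f}")
print("all deviations within ±10%:", np.all(np.abs(dev) <= 10))
```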
With the implementation of the General Senior High School Mathematics Curriculum Standards (2017 Edition, Revised in 2020), probability and statistics, as important carriers of the core mathematical competencies of "mathematical modeling" and "data analysis," have increasingly demonstrated their educational value. By summarizing the historical evolution of probabilistic and statistical thinking and drawing on teaching practice cases, this study explores their unique role in cultivating students' core mathematical competencies. The research proposes a project-based teaching strategy grounded in real scenarios and empowered by technology. Through cases, it demonstrates how to use modern educational technology to support whole-process exploration spanning data collection, model construction, and conclusion verification, so as to promote the transformation of middle school probability and statistics teaching from knowledge transmission to competency development, and to provide a practical reference for curriculum reform. Funding: 2021 Annual Research Project of Yili Normal University (2021YSBS012).
This paper focuses on the ideological and political construction of the course "Probability Theory and Mathematical Statistics." Addressing the current tendency in teaching to emphasize knowledge transmission while neglecting value guidance, and in line with the requirements of ideological and political education policies in the new era, it explores paths for integrating professional courses with ideological and political education. Through literature analysis, case comparison, and empirical research, the study proposes a systematic implementation plan covering the design of teaching objectives, the reconstruction of teaching content, and the optimization of the evaluation system. The purpose is to cultivate students' sense of social responsibility and innovative awareness by drawing out the ideological and political elements in mathematics. The research results provide a practical reference for colleges and universities seeking to deepen the reform of ideological and political education in courses and to advance the fundamental task of fostering virtue through education in STEM education. Funding: Shaanxi Provincial 14th Five-Year Plan for Educational Science Research (SGH24Q481).
Variation of reservoir physical properties can cause changes in its elastic parameters. However, this is not a simple linear relation. Furthermore, the lack of observations, data overlap, noise interference, and idealized models increase the uncertainty of the inversion result. Thus, we propose an inversion method that differs from traditional statistical rock physics modeling. First, we use deterministic and stochastic rock physics models that account for the uncertainties of elastic parameters obtained by prestack seismic inversion, and we introduce weighting coefficients to establish a weighted statistical relation between reservoir and elastic parameters. Second, based on the weighted statistical relation, we use Markov chain Monte Carlo simulation to generate the random joint distribution space of reservoir and elastic parameters, which serves as the sample solution space of an objective function. Finally, we propose a fast solution criterion that maximizes the posterior probability density to obtain reservoir parameters. The method is highly efficient and has strong application potential. Funding: National Science and Technology Major Project (No. 2011ZX05007-006), the 973 Program of China (No. 2013CB228604), and the Major Project of PetroChina (No. 2014B-0610).
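A minimal sketch of the Markov chain Monte Carlo step described above, assuming Python/NumPy; the linear `forward` rock-physics relation, the observed value, and all constants are hypothetical stand-ins for the paper's weighted statistical relation, and a simple random-walk Metropolis-Hastings sampler replaces its full scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in rock-physics forward relation: an elastic parameter (e.g. Vp)
# as a noisy linear function of a reservoir parameter (e.g. porosity phi).
def forward(phi):
    return 5.5 - 4.0 * phi          # km/s, hypothetical trend

vp_obs, sigma = 4.3, 0.15           # Vp from prestack inversion and its uncertainty

def log_post(phi):
    if not 0.0 < phi < 0.4:         # uniform prior on a plausible porosity range
        return -np.inf
    return -0.5 * ((vp_obs - forward(phi)) / sigma) ** 2

# Metropolis-Hastings: random-walk proposals over porosity
phi, samples = 0.2, []
for _ in range(20000):
    cand = phi + rng.normal(0, 0.02)
    if np.log(rng.uniform()) < log_post(cand) - log_post(phi):
        phi = cand
    samples.append(phi)

post = np.array(samples[5000:])     # discard burn-in
print(f"posterior porosity: {post.mean():.3f} +/- {post.std():.3f}")
# Maximum-posterior-style point estimate: densest histogram bin
hist, edges = np.histogram(post, bins=50)
print("MAP estimate ~", edges[hist.argmax()])
```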
Ocean waves are the core environmental elements affecting the motions and structural design of ships. Statistical analysis of wave parameters is the basis for establishing long-term models of ship environmental adaptability. Observations from coastal stations, buoys, altimeters, and volunteer ships covering 1993 to 2011 were interpolated onto lon-lat grids using the bilinear method to produce analytical fields of ocean waves. Using optimal interpolation, these analysis wave fields were assimilated into WAVEWATCH III (WW3) simulation results. From the assimilated results, wave rose statistics, wave heights for multiyear return periods, and the extreme 2-D wave spectra relevant to ship seakeeping were calculated. Finally, the wave statistics in China's offshore waters were analyzed in detail. Funding: National Natural Science Foundation of China (No. 41406032 and No. 41376014) and the Open Fund of the State Key Laboratory of Satellite Ocean Environment Dynamics (No. SOED1305).
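As a hedged sketch of the regridding step, the following Python code uses SciPy's `griddata` to interpolate scattered wave observations onto a 1-degree lon-lat grid; `griddata`'s piecewise-linear method stands in for the paper's bilinear scheme, and the synthetic significant-wave-height observations are hypothetical.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Hypothetical scattered significant-wave-height obs (buoys, altimeter, ships)
lon = rng.uniform(105, 125, 300)
lat = rng.uniform(5, 40, 300)
hs  = 1.5 + 0.03 * (lat - 5) + rng.normal(0, 0.1, 300)   # toy Hs field (m)

# Target 1-degree lon-lat analysis grid
glon, glat = np.meshgrid(np.arange(105, 126), np.arange(5, 41))
hs_grid = griddata((lon, lat), hs, (glon, glat), method="linear")

# Points outside the convex hull of the obs come back as NaN
print("grid shape:", hs_grid.shape, " mean Hs:", round(float(np.nanmean(hs_grid)), 2), "m")
```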
To reduce the heavy burden that falsified sewage monitoring data places on environmental monitoring work, the Grubbs method, box plots, the t-test, and other methods are used to analyze the data in depth, providing a convenient and simple workflow for screening sewage monitoring data.
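A minimal Python sketch of two of the screening methods mentioned (the Grubbs test and the box-plot rule), assuming NumPy/SciPy; the COD readings and helper functions are hypothetical illustrations, not the paper's exact workflow.

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.05):
    """One-pass two-sided Grubbs test: index of the flagged outlier, or None."""
    x = np.asarray(x, float)
    n = len(x)
    g = np.abs(x - x.mean()).max() / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return int(np.abs(x - x.mean()).argmax()) if g > g_crit else None

def boxplot_outliers(x, k=1.5):
    """Tukey box-plot rule: points beyond k*IQR of the quartiles."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return [v for v in x if v < q1 - k * iqr or v > q3 + k * iqr]

cod = [45.2, 44.8, 46.1, 45.5, 44.9, 45.3, 12.0]   # hypothetical COD readings (mg/L)
print("Grubbs flags index:", grubbs_test(cod))      # -> 6 (the suspicious 12.0)
print("Box-plot flags:", boxplot_outliers(cod))     # -> [12.0]
```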
There is currently a lack of unified standard methods for determining antimony content in groundwater in China. The precision and trueness of the relevant detection techniques have not yet been systematically and quantitatively evaluated, which limits the effective implementation of environmental monitoring. In response to this key technical gap, this study aimed to establish a standardized method for determining antimony in groundwater using hydride generation-atomic fluorescence spectrometry (HG-AFS). Ten laboratories participated in inter-laboratory collaborative tests, and the statistical analysis of the test data was carried out in strict accordance with the technical specifications of GB/T 6379.2—2004 and GB/T 6379.4—2006. The consistency and outliers of the data were examined using Mandel's h and k statistics, the Grubbs test, and the Cochran test, and outliers were removed to optimize the dataset, thereby significantly improving reliability and accuracy. Based on the optimized data, parameters such as the repeatability limit (r), reproducibility limit (R), and method bias (δ) were determined, and the trueness of the method was statistically evaluated. Precision-function relationships were also established, and all results met the requirements. The results show that the lower the antimony content, the lower the repeatability limit (r) and reproducibility limit (R), indicating that the measurement error mainly originates from the method's detection limit and instrument sensitivity. Therefore, improving instrument sensitivity and lowering the detection limit are key to controlling analytical error and improving precision. This study provides reliable data support and a solid technical foundation for establishing and evaluating standardized methods for determining antimony content in groundwater. Funding: National Natural Science Foundation of China (Project No. 42307555).
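The repeatability and reproducibility limits can be sketched as follows, assuming the usual ISO 5725-2 estimators (GB/T 6379.2 is the Chinese adoption of ISO 5725-2), with r = 2.8·s_r and R = 2.8·s_R for a balanced design; the laboratory results below are hypothetical, not the collaborative-test data.

```python
import numpy as np

def precision_limits(lab_results):
    """Repeatability (r) and reproducibility (R) limits, ISO 5725-2 style.
    lab_results: list of per-laboratory replicate arrays (balanced design)."""
    labs = [np.asarray(v, float) for v in lab_results]
    n = len(labs[0])                                   # replicates per lab
    means = np.array([v.mean() for v in labs])
    s_r2 = np.mean([v.var(ddof=1) for v in labs])      # within-lab variance
    s_d2 = means.var(ddof=1)                           # variance of lab means
    s_L2 = max(s_d2 - s_r2 / n, 0.0)                   # between-lab component
    s_R2 = s_r2 + s_L2
    return 2.8 * np.sqrt(s_r2), 2.8 * np.sqrt(s_R2)    # r, R

# Hypothetical Sb results (ug/L) from 5 labs x 3 replicates at one level
labs = [[5.02, 5.10, 4.95], [5.21, 5.18, 5.25], [4.88, 4.92, 4.85],
        [5.05, 5.00, 5.11], [5.30, 5.24, 5.33]]
r, R = precision_limits(labs)
print(f"r = {r:.3f} ug/L, R = {R:.3f} ug/L")
```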
Groundwater modeling remains challenging due to the heterogeneity and complexity of aquifer systems, necessitating efforts to quantify groundwater level (GWL) dynamics to inform policymakers and hydrogeologists. This study introduces a novel Fuzzy Nonlinear Additive Regression (FNAR) model to predict monthly GWL in an unconfined aquifer in eastern Iran, using a 19-year (1998–2017) dataset from 11 piezometric wells. Under three scenarios of progressively increasing input complexity, the study used readily available climate data, including precipitation (Prc), temperature (Tave), relative humidity (RH), and evapotranspiration (ETo). The dataset was split into training (70%) and validation (30%) subsets. Among the three input scenarios, Scenario 3 (Sc3, incorporating all four variables) achieved the best predictive performance, with RMSE ranging from 0.305 m to 0.768 m, MAE from 0.203 m to 0.522 m, NSE from 0.661 to 0.980, and PBIAS from 0.771% to 0.981%, indicating low bias and high reliability. However, Sc2 (excluding ETo), with RMSE ranging from 0.4226 m to 0.9909 m, MAE from 0.3418 m to 0.8173 m, NSE from 0.2831 to 0.9674, and PBIAS from −0.598% to 0.968% across different months, offers practical advantages in data-scarce settings. The FNAR model outperforms conventional Fuzzy Least Squares Regression (FLSR) and holds promise for GWL forecasting in data-scarce regions where physical or numerical models are impractical. Future research should focus on integrating FNAR with deep learning algorithms and real-time data assimilation, expanding applications across diverse hydrogeological settings. Funding: Iran National Science Foundation (INSF) and the University of Birjand (grant No. 4034771).
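A hedged sketch of the four reported evaluation metrics, assuming Python/NumPy; the monthly GWL series is hypothetical, and the PBIAS sign convention (positive when the model underpredicts) is one common choice rather than necessarily the study's.

```python
import numpy as np

def gwl_metrics(obs, sim):
    """RMSE, MAE, NSE, and PBIAS for monthly groundwater-level predictions."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    mae = np.mean(np.abs(obs - sim))
    nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100 * np.sum(obs - sim) / np.sum(obs)
    return rmse, mae, nse, pbias

# Hypothetical monthly GWL (m below datum) for one piezometer
obs = [12.4, 12.6, 12.9, 13.1, 13.0, 12.7, 12.5, 12.3]
sim = [12.5, 12.5, 12.8, 13.3, 12.9, 12.8, 12.4, 12.4]
print("RMSE=%.3f  MAE=%.3f  NSE=%.3f  PBIAS=%.2f%%" % gwl_metrics(obs, sim))
```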
Through the illustration of a specific problem, this paper demonstrates that using Monte Carlo simulation improves the intuitiveness of teaching a Probability and Mathematical Statistics course while saving instructors' effort. It is anticipated that Monte Carlo simulation will become one of the major teaching methods for Probability and Mathematical Statistics courses in the future.
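For instance, a classroom Monte Carlo demonstration might estimate the birthday-collision probability and compare it with the exact value; this Python/NumPy sketch is illustrative and not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Classroom demo: P(at least two of 30 students share a birthday).
# Exact answer: 1 - prod_{k=0}^{29} (365-k)/365 ~= 0.7063
n_trials, n_students = 100_000, 30
hits = 0
for _ in range(n_trials):
    bdays = rng.integers(0, 365, n_students)
    hits += len(np.unique(bdays)) < n_students       # a repeated birthday occurred
print("Monte Carlo estimate:", hits / n_trials)

exact = 1 - np.prod((365 - np.arange(n_students)) / 365)
print("Exact probability  :", round(float(exact), 4))
```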
Cancer is a major public health issue in most countries, including China. Accurate and valid information on cancer incidence, mortality, survival, and relevant factors is indispensable for cancer prevention and control. Since the national cancer registry program was launched by the Ministry of Health of China in 2008, the National Central Cancer Registry (NCCR) has been releasing cancer incidence and mortality figures based on data collected from the cancer registries supported by the program. These cancer statistics provide current data from the registered areas and aim to accurately reflect the cancer burden and epidemic in China. In 2014, the NCCR collected data for calendar year 2011 from 234 registries. After comprehensive quality evaluation, data from 177 registries were selected as the sources for reports on cancer incidence and mortality in the registration areas in 2011. These reports are the most up-to-date cancer statistics available, covering many more registries and a larger population than previous releases.
This paper deals mainly with the problem of direction of arrival (DOA) estimation for multiple narrow-band sources impinging on a uniform linear array in impulsive noise environments. By modeling the impulsive noise as an α-stable distribution, new methods are proposed that combine the sparse signal representation technique with fractional lower-order statistics theory. In the new algorithms, the fractional lower-order statistics vectors of the array output signal are sparsely represented on an overcomplete basis, and the DOAs can be effectively estimated by searching for the sparsest coefficients. To enhance robustness, improved algorithms are developed that eliminate the fractional lower-order statistics of the noise from the fractional lower-order statistics vector of the array output through a linear transformation. Simulation results show the effectiveness of the proposed methods over a wide range of highly impulsive environments. Funding: National Natural Science Foundation of China (61301228, 61371091) and the Fundamental Research Funds for the Central Universities (3132014212).
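A hedged numerical illustration of why fractional lower-order statistics are preferred under α-stable noise: for α < 2 the variance diverges, while moments E|x|^p with p < α stay finite. The SciPy-based sketch below is illustrative only; the final "fractional correlation term" is a FLOS-style stand-in, not the paper's exact statistic.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(7)
alpha = 1.5                       # characteristic exponent of the impulsive noise
noise = levy_stable.rvs(alpha, 0.0, size=20000, random_state=rng)

# Second-order statistics are unstable for alpha < 2 ...
print("sample variance:", np.var(noise))            # large and run-to-run unstable

# ... but fractional lower-order moments E|x|^p with p < alpha are finite
p = 0.8
print("FLOM E|x|^p    :", np.mean(np.abs(noise) ** p))

# FLOS-style quantity used in place of an ordinary correlation
x = noise + rng.normal(size=noise.size)             # toy array-output sample
flos = np.mean(x * np.sign(noise) * np.abs(noise) ** (p - 1))
print("fractional correlation term:", flos)
```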
This paper presents a method for solving structural optimization problems using particle swarm optimization (PSO), surrogate models, and Bayesian statistics. PSO is a random/stochastic search algorithm designed to find the global optimum. However, PSO needs many evaluations compared to gradient-based optimization, which increases the analysis cost of structural optimization. One way to reduce computing costs in stochastic optimization is to use approximation techniques. In this work, surrogate models are used, including the response surface method (RSM) and Kriging. When surrogate models are used, there are errors between exact and approximated values; these errors reduce the reliability of the optimum values and undermine the practicality of using surrogate models. In this paper, Bayesian statistics is used to obtain more reliable results. To verify and confirm the efficiency of the proposed method combining surrogate models and Bayesian statistics for stochastic structural optimization, two numerical examples are optimized, and the optimization of a hub sleeve is demonstrated as a practical problem.
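A minimal global-best PSO sketch in Python/NumPy, with a cheap sphere function standing in for an expensive structural analysis; the parameters (inertia w, acceleration coefficients c1, c2) are common textbook choices, not the paper's settings, and no surrogate or Bayesian layer is included here.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                       # stand-in objective; imagine an expensive FEA
    return np.sum(x ** 2, axis=1)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), sphere(x)                 # personal bests
gbest = pbest[pbest_f.argmin()].copy()               # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))       # random factors per particle
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best value:", pbest_f.min())                  # approaches 0
```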
The relationship between fractal point pattern modeling and statistical methods of parameter estimation in point-process modeling is reviewed. Statistical estimation of the cluster fractal dimension using Ripley's K-function has advantages over the more commonly used methods of box-counting and cluster fractal dimension estimation because it corrects for edge effects, not only for rectangular study areas but also for study areas with curved boundaries determined by regional geology. Applying box-counting to estimate the fractal dimension of point patterns has the disadvantage that, in general, it is subject to relatively strong "roll-off" effects for smaller boxes. The point patterns used as examples in this paper are mainly for gold deposits in the Abitibi volcanic belt on the Canadian Shield. Additionally, it is proposed that, worldwide, the local point patterns of podiform Cr, volcanogenic massive sulphide, and porphyry copper deposits, which are spatially distributed within irregularly shaped favorable tracts, satisfy the fractal clustering model with similar fractal dimensions. The problem of deposit size (metal tonnage) is also considered. Several examples are provided of cases in which the Pareto distribution gives good results for the largest deposits in metal size-frequency distribution modeling. Funding: Geological Survey of Canada and China University of Geosciences (Wuhan).
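As a hedged sketch of estimating a cluster fractal dimension from Ripley's K-function, the Python code below uses a naive K estimator (deliberately omitting the edge correction the paper emphasizes) and reads D off the slope of log K(r) versus log r; the clustered point pattern is synthetic, not deposit data.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy clustered pattern in the unit square (stand-in for deposit locations)
centers = rng.uniform(0, 1, (20, 2))
pts = np.vstack([c + 0.01 * rng.normal(size=(25, 2)) for c in centers])

n, area = len(pts), 1.0
lam = n / area                                       # overall intensity
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

radii = np.logspace(-2.2, -0.7, 12)
# Naive K(r): pair counts within r, no edge correction (subtract the diagonal)
K = np.array([((d < r).sum() - n) / (n * lam) for r in radii])

# For a fractal clustering model, K(r) ~ r^D, so D is the log-log slope
D, _ = np.polyfit(np.log(radii), np.log(K), 1)
print(f"estimated cluster fractal dimension D ~= {D:.2f}")
# Clustering pulls the slope below 2 (the homogeneous-Poisson value) at these scales
```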