In this work, some non-homogeneous Poisson models are considered to study the behaviour of ozone in the city of Puebla, Mexico. Several functions are used as the rate function for the non-homogeneous Poisson process. In addition to their dependence on time, these rate functions also depend on some parameters that need to be estimated, and a Bayesian approach is taken to estimate them. Since the expressions for the distributions of the parameters involved in the models are very complex, Markov chain Monte Carlo algorithms are used to estimate them. The methodology is applied to the ozone data from the city of Puebla, Mexico.
We consider some non-homogeneous Poisson models to estimate the mean number of times that a given environmental threshold of interest is surpassed by a given pollutant. Seven different rate functions for the Poisson processes describing the models are taken into account: the Weibull, the exponentiated-Weibull, and their generalisation the Beta-Weibull rate function, as well as the Musa-Okumoto, the Goel-Okumoto, a generalised Goel-Okumoto and the Weibull-geometric rate functions. Where thought justifiable, models allowing the presence of change-points are also considered. The different models are applied to the daily maximum ozone measurements provided by the monitoring network of the Metropolitan Area of Mexico City, with the aim of comparing how well the different rate functions fit the data. Even though some of the rate functions have been considered before, here they are applied to the same data set; in previous works they were used on different data sets, and therefore a comparison of the adequacy of those models was not possible. The measurements considered here were obtained after a series of environmental measures were implemented in Mexico City, and hence the data present a different behaviour from that of earlier studies.
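As a rough illustration of the kind of model compared above, the following sketch simulates exceedance times from a non-homogeneous Poisson process with a Weibull rate function via thinning. The parameter values and the one-year horizon are made up for illustration; this is not the authors' code or data.

```python
import numpy as np

def weibull_rate(t, alpha, beta):
    """Weibull rate function: lambda(t) = (alpha/beta) * (t/beta)**(alpha - 1)."""
    return (alpha / beta) * (t / beta) ** (alpha - 1)

def simulate_nhpp(rate, rate_max, horizon, rng):
    """Draw event times on (0, horizon] by thinning a homogeneous Poisson
    process of intensity rate_max, which must dominate rate(t)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > horizon:
            return np.array(times)
        if rng.uniform() * rate_max <= rate(t):
            times.append(t)

rng = np.random.default_rng(0)
alpha, beta, horizon = 1.5, 30.0, 365.0        # made-up values
lam_max = weibull_rate(horizon, alpha, beta)   # rate is increasing for alpha > 1
events = simulate_nhpp(lambda t: weibull_rate(t, alpha, beta), lam_max, horizon, rng)
print(f"{events.size} simulated exceedances in one year")
```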
Non-homogeneous Poisson processes (NHPP) are frequently used as models for events that occur randomly in a given time period, for example failure times, times of accident occurrences, etc. In this work, NHPPs are used to model monthly maximum observations of urban ozone over a period of five years from the meteorological stations of Merced, Pedregal and Plateros, located in the metropolitan area of Mexico City. The data of interest are the times at which the observations surpassed the permissible ozone level of 0.11 ppm, established by the Mexican Official Norm (NOM-020-SSA1-1993) to preserve public health.
To address the problem of solving the improved non-homogeneous Poisson process (NHPP) model in engineering applications, an immune clone maximum likelihood estimation (MLE) method for obtaining the model parameters is proposed. The negative log-likelihood function is used as the objective function to optimize, instead of using an iterative method to solve a complex system of equations, and the parameter estimation problem of the improved NHPP model is solved by an immune clone algorithm. Interval estimates of the reliability indices are given using the Fisher information matrix and the delta method. An example of failure-truncated data from multiple numerical control (NC) machine tools is used to validate the method, and the results show that the algorithm has a higher convergence rate and computational accuracy, which demonstrates the feasibility of the method.
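The abstract's key idea, optimizing the negative log-likelihood directly rather than solving the likelihood equations, can be sketched as follows for a standard power-law NHPP with failure-truncated data. A general-purpose Nelder-Mead optimizer stands in for the paper's immune clone algorithm, and the failure times are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, times):
    """Negative log-likelihood of a power-law NHPP (failure-truncated data):
    lambda(t) = a*b*t**(b-1), m(t) = a*t**b, T = last failure time."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    T = times[-1]
    return a * T**b - np.sum(np.log(a * b * times ** (b - 1)))

# made-up failure times (hours) for illustration
times = np.array([35.0, 110.0, 250.0, 530.0, 880.0, 1300.0])
res = minimize(neg_log_lik, x0=[0.01, 1.0], args=(times,), method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"a = {a_hat:.4g}, b = {b_hat:.4g}")  # b < 1 would suggest reliability growth
```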
New armament systems require methods for handling multi-stage system reliability-growth statistical problems over diverse populations in order to improve reliability before mass production starts. For the test processes typical of complex system development, which are expensive and have small sample sizes, specific methods are studied for processing the statistical information of Bayesian reliability growth regarding diverse populations. Firstly, according to the characteristics of reliability growth during product development, the Bayesian method is used to integrate the testing information of multiple stages and the order relations of the distribution parameters. A Gamma-Beta prior distribution is then proposed based on the non-homogeneous Poisson process (NHPP) corresponding to the reliability growth process. The posterior distribution of the reliability parameters is obtained for the different stages of the product, and the reliability parameters are evaluated from the posterior distribution. Finally, the Bayesian approach proposed in this paper for multi-stage reliability growth tests is applied to a small-sample test process in the astronautics field. The results of a numerical example show that the presented model can synthesize the diverse information, paving the way for applying the Bayesian model to multi-stage reliability growth test evaluation with small sample sizes. The method is useful for evaluating multi-stage system reliability and for making reliability growth plans rationally.
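A minimal sketch of the multi-stage Bayesian updating idea, using a plain Gamma-Poisson conjugate model as a simplified stand-in for the paper's Gamma-Beta prior: the posterior from each test stage becomes the prior for the next, so information from all stages is synthesized. All counts, test durations and prior values below are made up.

```python
from scipy import stats

# Simplified stand-in: failure counts in each stage are Poisson with mean
# rate * T_i, and a Gamma prior on the rate is updated stage by stage.
a, b = 2.0, 100.0                               # made-up Gamma(a, b) prior (shape, rate-hours)
stages = [(3, 500.0), (2, 800.0), (1, 1200.0)]  # (failures, test hours) per stage
for i, (n_fail, hours) in enumerate(stages, 1):
    a, b = a + n_fail, b + hours                # conjugate Gamma posterior
    lo, hi = stats.gamma.ppf([0.05, 0.95], a, scale=1.0 / b)
    print(f"stage {i}: posterior failure rate mean {a / b:.4f}/h, "
          f"90% interval ({lo:.4f}, {hi:.4f})")
```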
The Goel-Okumoto software reliability model, also known as the Exponential Non-homogeneous Poisson Process, is one of the earliest software reliability models to be proposed. From the literature, it is evident that most of the work done on the Goel-Okumoto software reliability model concerns parameter estimation using the MLE method and model fit. It is widely known that predictive analysis is very useful for modifying, debugging and determining when to terminate the software development testing process. However, there is a conspicuous absence of literature on both classical and Bayesian predictive analyses for the model. This paper presents some results about predictive analyses for the Goel-Okumoto software reliability model. Driven by the requirement of highly reliable software used in computers embedded in automotive, mechanical and safety control systems, industrial and quality process control, real-time sensor networks, aircraft, and nuclear reactors, among others, we address four issues in single-sample prediction associated closely with the software development process. We adopt Bayesian methods based on non-informative priors to develop explicit solutions to these problems. An example with real data in the form of times between software failures is used to illustrate the developed methodologies.
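For orientation, the sketch below computes basic single-sample predictive quantities implied by a fitted Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)); since NHPP counts over disjoint intervals are Poisson, the probability of a failure-free window follows directly. The parameter values are hypothetical, not estimates from the paper.

```python
import numpy as np

# Point prediction from a fitted Goel-Okumoto model, m(t) = a*(1 - exp(-b*t)).
# The parameter values below are made up for illustration.
a, b = 34.0, 5.7e-3    # expected total faults, per-hour detection rate
T = 250.0              # hours of testing so far
delta = 50.0           # prediction window (hours)

m = lambda t: a * (1.0 - np.exp(-b * t))
expected_new = m(T + delta) - m(T)      # mean faults in (T, T + delta]
p_failure_free = np.exp(-expected_new)  # NHPP: P(no event in the window)
remaining = a - m(T)                    # expected undetected faults
print(f"E[new faults] = {expected_new:.2f}, "
      f"P(failure-free next {delta:.0f} h) = {p_failure_free:.3f}, "
      f"remaining = {remaining:.1f}")
```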
Deep learning (DL) has proven to be important for computed tomography (CT) image denoising. However, such models are usually trained under supervision, requiring paired data that may be difficult to obtain in practice. Diffusion models offer an unsupervised means of solving a wide range of inverse problems via posterior sampling. In particular, using the estimated unconditional score function of the prior distribution, obtained via unsupervised learning, one can sample from the desired posterior via hijacking and regularization. However, due to the iterative solvers used, the number of function evaluations (NFE) required may be orders of magnitude larger than for single-step samplers. In this paper, we present a novel image denoising technique for photon-counting CT by extending the unsupervised approach to inverse problem solving to the case of Poisson flow generative models (PFGM)++. By hijacking and regularizing the sampling process we obtain a single-step sampler, that is, NFE = 1. Our proposed method incorporates posterior sampling using diffusion models as a special case. We demonstrate that the added robustness afforded by the PFGM++ framework yields significant performance gains. Our results indicate competitive performance compared to popular supervised methods, including state-of-the-art diffusion-style models with NFE = 1 (consistency models), as well as unsupervised and non-DL-based image denoising techniques, on clinical low-dose CT data and clinical images from a prototype photon-counting CT system developed by GE HealthCare.
The Goel-Okumoto software reliability model is one of the earliest attempts to use a non-homogeneous Poisson process to model failure times observed during a software test interval. It is known as the exponential NHPP model, as it describes an exponential software failure curve. Parameter estimation, model fit and predictive analyses based on one sample have been conducted on the Goel-Okumoto software reliability model; however, predictive analyses based on two samples have not. In two-sample prediction, the parameters and characteristics of the first sample are used to analyze and make predictions for the second sample, which helps save time and resources during the software development process. This paper presents some results about predictive analyses for the Goel-Okumoto software reliability model based on two samples. We address three issues in two-sample prediction associated closely with the software development testing process. Bayesian methods based on non-informative priors are adopted to develop solutions to these issues. The developed methodologies are illustrated with two sets of software failure data simulated from the Goel-Okumoto software reliability model.
To enhance the computational efficiency of spatio-temporally discretized phase-field models, we present a high-speed solver specifically designed for the Poisson equations, a component frequently used in the numerical computation of such models. This efficient solver employs algorithms based on discrete cosine transformations (DCT) or discrete sine transformations (DST) and is not restricted to any particular spatio-temporal scheme. Our proposed methodology is appropriate for a variety of phase-field models and is especially efficient when combined with flow field systems. This study also conducts an extensive numerical comparison and finds that employing DCT and DST techniques not only yields results comparable to those obtained via the Multigrid (MG) method, a conventional approach for solving the Poisson equations, but also enhances computational efficiency by over 90%.
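A minimal 1D version of the DST-based idea: with zero Dirichlet boundaries, the discrete Laplacian is diagonalized by the type-I discrete sine transform, so the Poisson solve reduces to one forward transform, a division by eigenvalues, and one inverse transform. The paper targets multi-dimensional phase-field solvers; this sketch only illustrates the diagonalization on a manufactured problem.

```python
import numpy as np
from scipy.fft import dst, idst

def poisson_dirichlet_1d(f, h):
    """Solve -u'' = f on a uniform grid with zero Dirichlet boundaries by
    diagonalizing the 1D Laplacian with the type-I discrete sine transform."""
    n = f.size
    k = np.arange(1, n + 1)
    eig = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2  # Laplacian eigenvalues
    return idst(dst(f, type=1) / eig, type=1)

n = 255
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
u_exact = np.sin(2 * np.pi * x)               # manufactured solution
f = (2 * np.pi) ** 2 * np.sin(2 * np.pi * x)  # so that -u'' = f
u = poisson_dirichlet_1d(f, h)
print("max error:", np.abs(u - u_exact).max())  # O(h^2) discretization error
```

The transform pair costs O(n log n) per solve with no iteration, which is where the reported speedup over iterative methods such as Multigrid comes from.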
A drought occurs when reduced rainfall leads to a water crisis, impacting daily life. Over recent decades, droughts have affected various regions, including South Sulawesi, Indonesia. This study aims to map the probability of meteorological drought months using the 1-month Standardized Precipitation Index (SPI) in South Sulawesi. Based on the SPI, meteorological drought characteristics are inversely proportional to drought event intensity, which can be modeled using a non-homogeneous Poisson process, specifically the Power Law Process. The estimation method employs Maximum Likelihood Estimation (MLE), where drought event intensities are treated as random variables over a set time interval. Future drought months are estimated using the cumulative Power Law Process function, with the β and γ parameters both greater than 0. The probability of drought months is determined using the non-homogeneous Poisson process, which models event occurrence over time while accounting for varying intensities. The results indicate that, of the 24 districts/cities in South Sulawesi, 14 experienced meteorological drought based on the SPI and the Power Law Process model. The estimated number of drought months in the next 12 months is one, with an occurrence probability of 0.37, occurring in November in the Selayar, Bulukumba, Bantaeng, Jeneponto, Takalar and Gowa areas, in October in the Sinjai, Barru, Bone, Soppeng, Pinrang and Pare-pare areas, and in December in the Maros and Makassar areas.
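A sketch of the Power Law Process workflow described above, using the standard closed-form MLEs for time-truncated data in the m(t) = (t/θ)^β parameterization (the abstract's γ corresponds to a scale parameter), followed by the Poisson probability of exactly one drought month in the next 12 months. The occurrence times below are fabricated, not the South Sulawesi data.

```python
import numpy as np

def plp_mle(times, T):
    """Closed-form MLEs for the Power Law Process observed on (0, T]:
    m(t) = (t/theta)**beta, lambda(t) = (beta/theta)*(t/theta)**(beta-1)."""
    times = np.asarray(times, dtype=float)
    n = times.size
    beta = n / np.sum(np.log(T / times))
    theta = T / n ** (1.0 / beta)
    return beta, theta

# made-up drought-month occurrence times (months since start of record)
times = [7.0, 19.0, 30.0, 44.0, 55.0, 68.0, 79.0, 91.0, 103.0, 115.0]
T = 120.0                # 10 years of record
beta, theta = plp_mle(times, T)

m = lambda t: (t / theta) ** beta
mu = m(T + 12.0) - m(T)  # expected drought months in the next 12 months
p_one = mu * np.exp(-mu) # Poisson probability of exactly one
print(f"beta = {beta:.2f}, theta = {theta:.1f}, "
      f"E[droughts next year] = {mu:.2f}, P(exactly 1) = {p_one:.2f}")
```

The abstract's reported occurrence probability of 0.37 is presumably this same Poisson quantity evaluated at the fitted parameters for each district.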
This article discusses the Bayesian approach for count data using non-homogeneous Poisson processes, considering different prior distributions for the model parameters. A Bayesian approach using Markov Chain Monte Carlo (MCMC) simulation methods for this model was first introduced by [1], taking into account software reliability data and considering non-informative prior distributions for the parameters of the model. With the non-informative prior distributions presented by these authors, computational difficulties may occur when using MCMC methods. This article considers different prior distributions for the parameters of the proposed model, and studies the effect of such prior distributions on the convergence and accuracy of the results. In order to illustrate the proposed methodology, two examples are considered: the first one has simulated data, and the second has a set of data for pollution issues at a region in Mexico City.
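A minimal sketch of the kind of MCMC scheme discussed above: random-walk Metropolis on the log-parameters of a power-law NHPP, here with arbitrary weakly informative Gamma priors rather than the specific priors studied in the article, and synthetic event times in place of real data.

```python
import numpy as np
from scipy import stats

def log_post(log_ab, times, T):
    """Log-posterior for a power-law NHPP, m(t) = a*t**b, with independent
    Gamma(1, scale=100) priors on a and b (an arbitrary illustrative choice)."""
    a, b = np.exp(log_ab)
    loglik = times.size * np.log(a * b) + (b - 1.0) * np.sum(np.log(times)) - a * T**b
    logprior = stats.gamma.logpdf(a, 1.0, scale=100.0) + stats.gamma.logpdf(b, 1.0, scale=100.0)
    return loglik + logprior + log_ab.sum()  # Jacobian of the log transform

rng = np.random.default_rng(1)
times = np.cumsum(rng.exponential(20.0, size=30))  # synthetic event times
T = times[-1]

draws, cur = [], np.log([0.1, 1.0])
cur_lp = log_post(cur, times, T)
for _ in range(20000):                             # random-walk Metropolis
    prop = cur + rng.normal(0.0, 0.1, size=2)
    prop_lp = log_post(prop, times, T)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    draws.append(np.exp(cur))
a_s, b_s = np.array(draws[5000:]).T                # discard burn-in
print(f"posterior mean a = {a_s.mean():.3f}, b = {b_s.mean():.3f}")
```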
This paper aims to study a new grey prediction approach and its solution for forecasting the main system variable whose accurate value cannot be collected but whose potential value set can be defined. Based on the traditional non-homogeneous discrete grey forecasting model (NDGM), the interval grey number and its algebraic operations are redefined and combined with the NDGM model to construct a new interval grey number sequence prediction approach. The solving principle of the model is analyzed; new accuracy evaluation indices, i.e. the mean absolute percentage error of the mean value sequence (MAPEM) and the mean percent of the interval sequence simulating value set covered (MPSVSC), are defined; and the procedure of the interval-grey-number-based NDGM (IG-NDGM) is given. Finally, a numerical case is used to test the modelling accuracy of the proposed model. Results show that the proposed approach can solve the interval grey number sequence prediction problem and that it performs much better than the traditional DGM(1,1) and GM(1,1) models.
The application of the Tikhonov regularization method to the ill-conditioned problems arising in regional gravity field modeling by Poisson wavelets is studied. In particular, the choices of the regularization matrices as well as the approaches for estimating the regularization parameters are investigated in detail. The numerical results show that the regularized solutions derived from first-order regularization are better than those obtained from zero-order regularization. For cross-validation, the optimal regularization parameters are estimated from the L-curve, variance component estimation (VCE) and minimum standard deviation (MSTD) approaches, respectively, and the results show that the regularization parameters derived from the different methods are consistent with each other. Together with first-order Tikhonov regularization and the VCE method, the optimal network of Poisson wavelets is derived, based on which the local gravimetric geoid is computed. The accuracy of the corresponding gravimetric geoid reaches 1.1 cm in the Netherlands, which validates the reliability of the Tikhonov regularization method in tackling the ill-conditioned problem of regional gravity field modeling.
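A toy version of the Tikhonov setup described above: a first-order (first-difference) regularization matrix, solved in closed form, with a crude scan over the regularization parameter standing in for the L-curve/VCE/MSTD machinery. The kernel and noise level are invented.

```python
import numpy as np

def tikhonov_solve(A, y, lam, order=1):
    """Tikhonov-regularized least squares: minimize ||A x - y||^2 + lam * ||L x||^2,
    with L the identity (order 0) or a first-difference matrix (order 1)."""
    n = A.shape[1]
    L = np.eye(n) if order == 0 else np.diff(np.eye(n), axis=0)
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)

# ill-conditioned toy problem: smooth signal observed through a smoothing kernel
rng = np.random.default_rng(0)
n = 80
t = np.linspace(0.0, 1.0, n)
A = np.exp(-200.0 * (t[:, None] - t[None, :]) ** 2)  # near-singular Gaussian kernel
x_true = np.sin(2 * np.pi * t)
y = A @ x_true + rng.normal(0.0, 1e-3, n)

# crude parameter scan: pick lambda balancing residual and solution seminorm
for lam in [1e-8, 1e-6, 1e-4, 1e-2]:
    x = tikhonov_solve(A, y, lam, order=1)
    print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - y):.2e}  "
          f"error={np.linalg.norm(x - x_true):.2e}")
```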
Introduction: Studies have shown Emergency Department (ED) crowding contributes to reduced quality of patient care, delays in starting treatments, and an increased number of patients leaving without being seen. This analysis shows how to theoretically and optimally align staffing to demand. Methods: The ED value stream was identified and mapped. Patients were stratified into three resource-driven care flow cells based on severity indices. Time observations were conducted for each of the key care team members, and the manual cycle times and service rates were calculated and stratified by severity index. Using X32 Healthcare's Online Staffing Optimization (OSO) tool, staffing inefficiencies were identified and an optimal schedule was created for each provider group. Results: Lower severity indices (higher-acuity patients) led to longer times for providers, nurses, patient care assistants, and clerks. The patient length of stay varied from under one hour to over five hours. The flow of patients varied considerably over the 24-hour period but was similar by day of the week. Using flow data, we showed that more nurses and care team members were needed during peak times of patient flow, and that eight-hour shifts would allow better flexibility. We showed that the additional salary hours added to the budget would be made up for by increased revenue recognized from decreasing the number of patients who leave without being seen. Conclusion: If implemented, these changes will improve ED flow by using lean tools and principles, ultimately leading to timely care, reduced waits, and an improved patient experience.
In this paper, we consider a compound Poisson risk model with taxes paid according to a loss-carry-forward system and dividends paid under a threshold strategy. First, the closed-form expression of the probability function for the total number of taxation periods over the lifetime of the surplus process is derived. Second, an analytical expression for the expected accumulated discounted dividends paid between two consecutive taxation periods is provided. In addition, explicit expressions are given for the case of exponential individual claims.
In this note we study the optimal dividend problem for a company whose surplus process, in the absence of dividend payments, evolves as a generalized compound Poisson model in which the counting process is a generalized Poisson process. This model includes the classical risk model and the Pólya-Aeppli risk model as special cases. The objective is to find a dividend policy so as to maximize the expected discounted value of dividends which are paid to the shareholders until the company is ruined. We show that under some conditions the optimal dividend strategy is formed by a barrier strategy. Moreover, two conjectures are proposed.
In recent years, there has been a trend in urban tunnel construction toward super-large-span tunnels for traffic diversion and route optimization purposes. However, the increased size makes tunnel support more difficult. Unfortunately, there are few studies on the failure and support mechanism of the surrounding rocks in the excavation of supported tunnels, while most model tests of super-large-span tunnels focus on the failure characteristics of surrounding rocks in tunnel excavation without supports. Based on the excavation compensation method (ECM), model tests of a super-large-span tunnel excavated with different anchor cable support methods in the initial support stage were carried out. The results indicate that during excavation of a super-large-span tunnel, the stress and displacement of the shallow surrounding rocks decrease, following a step-shape pattern, and tunnel failure is mainly concentrated in the vault and spandrel areas. Compared with conventional anchor cable supports, the NPR (negative Poisson's ratio) anchor cable support is more suitable for the initial support stage of super-large-span tunnels. The tunnel support theory, model test materials, methods, and results obtained in this study could provide references for the study of similar super-large-span tunnels.
Degradation process modeling is one of the research hotspots of prognostics and health management (PHM), and can be used to estimate system reliability and remaining useful life (RUL). In order to study the system degradation process, a cumulative damage model is used for degradation modeling. Assuming that the damage increments follow a Gamma distribution, the shock counting process is a homogeneous Poisson process (HPP) when the degradation process is linear, and a non-homogeneous Poisson process (NHPP) when the degradation process is nonlinear. A two-stage degradation system is considered in this paper, for which the degradation process is linear in the first stage and nonlinear in the second stage. A nonlinear modeling method for the considered system is put forward, and a reliability model and a remaining useful life model are established. A case study is given to validate the established models.
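A Monte Carlo sketch of the two-stage cumulative-damage model described above: shocks arrive as an HPP before the change time and as an NHPP with a power-law rate afterwards (simulated by thinning), each shock adds Gamma-distributed damage, and failure occurs when the accumulated damage crosses a threshold. All parameter values are illustrative, not from the paper's case study.

```python
import numpy as np

def failure_time(rng, lam0=0.5, tau=50.0, p=2.0, shape=2.0, scale=0.5,
                 D=60.0, t_max=150.0):
    """Simulate one failure time of a two-stage cumulative-damage process:
    shocks arrive as an HPP(lam0) up to the change time tau, then as an NHPP
    with rate lam0*(t/tau)**p; each shock adds Gamma(shape, scale) damage,
    and the system fails when total damage reaches the threshold D."""
    lam_max = lam0 * (t_max / tau) ** p      # dominating rate for thinning
    t, damage = 0.0, 0.0
    while t < t_max:
        t += rng.exponential(1.0 / lam_max)
        rate = lam0 if t <= tau else lam0 * (t / tau) ** p
        if rng.uniform() * lam_max <= rate:  # accepted shock
            damage += rng.gamma(shape, scale)
            if damage >= D:
                return t
    return np.inf                            # survived the horizon

rng = np.random.default_rng(2)
samples = np.array([failure_time(rng) for _ in range(2000)])
print(f"mean failure time = {samples[np.isfinite(samples)].mean():.1f}, "
      f"P(survive horizon) = {np.mean(np.isinf(samples)):.3f}")
```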