Experimental study is performed on the probabilistic models for the long fatigue crack growth rates (da/dN) of LZ50 axle steel. An equation for the crack growth rate was derived to account for the trend of the stress intensity factor range approaching the threshold and for the mean stress effect. Probabilistic models were then built on this equation. They consist of the probabilistic da/dN-△K relations, the confidence-based da/dN-△K relations, and the probabilistic- and confidence-based da/dN-△K relations, which respectively characterize the effects on probabilistic assessments of the scattering regularity of the test data, of the sample size, and of both. These relations provide a wide range of choices for practice. Analysis of the LZ50 steel test data indicates that the present models are valid and feasible.
The development of modern engineering components and equipment features large size, intricate shape and long service life, which places greater demands on valid methods for fatigue performance analysis. Achieving a smooth transformation between the fatigue properties of small-scale laboratory specimens and the fatigue strength of full-scale engineering components has been a long-term challenge. In this work, two dominant factors impeding this smooth transformation, the notch effect and the size effect, were experimentally studied through fatigue tests on notched specimens of different scales made of Al 7075-T6511, a very high-strength aviation alloy. Fractography analyses identified evidence of the size effect on notch fatigue damage evolution. Accordingly, the Energy Field Intensity (EFI) approach, initially developed for multiaxial notch fatigue analysis, was improved by utilizing the volume ratio of the Effective Damage Zones (EDZs) for size-effect correction. In particular, it was extended to a probabilistic model considering the inherent variability of the fatigue phenomenon. The experimental data of the Al 7075-T6511 notched specimens were compared with the model-predicted results, indicating the high potential of the proposed approach for fatigue evaluation under combined notch and size effects.
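The volume-ratio idea behind the EDZ size-effect correction can be illustrated with the classical weakest-link (Weibull) scaling, in which fatigue strength decreases as the highly stressed volume grows. The paper's EFI/EDZ formulation is more elaborate, so the snippet below is only a hedged sketch; the reference strength, volumes and Weibull modulus are illustrative assumptions, not data from the study.

```python
# Weakest-link (Weibull) size-effect sketch: a larger highly stressed
# volume lowers the expected fatigue strength.
#   sigma_2 = sigma_1 * (V1 / V2) ** (1 / m)

def weibull_size_correction(sigma_ref, v_ref, v_target, m):
    """Scale a reference strength sigma_ref (measured on volume v_ref)
    to a component with highly stressed volume v_target."""
    return sigma_ref * (v_ref / v_target) ** (1.0 / m)

# Hypothetical values: small lab specimen vs. an 8x larger notched component.
sigma_small = 480.0  # MPa, assumed lab fatigue strength
corrected = weibull_size_correction(sigma_small, v_ref=100.0, v_target=800.0, m=20.0)
```

With these assumed numbers the larger component's predicted strength drops by roughly 10%, which is the qualitative trend the size-effect correction is meant to capture.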
We present a stochastic trust-region model-based framework in which the trust-region radius is related to the probabilistic models. In particular, we propose a specific algorithm, termed STRME, in which the trust-region radius depends linearly on the gradient used to define the latest model. Complexity results for the STRME method in the nonconvex, convex and strongly convex settings are presented, matching those of existing algorithms based on probabilistic properties. In addition, several numerical experiments are carried out to reveal the benefits of the proposed method compared with existing stochastic trust-region methods and other relevant stochastic gradient methods.
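The key idea, a trust-region radius that depends linearly on the latest gradient, can be sketched in a few lines. This is not the authors' STRME algorithm (which involves stochastic models and acceptance tests); it is a minimal deterministic illustration of the radius rule delta = mu * ||g|| on a toy quadratic.

```python
import math

def strme_style_step(x, grad, mu=0.5):
    """One trust-region step with radius linear in the gradient norm.
    Model: m(s) = f(x) + g.s + 0.5 s'Bs with B = I, minimized over
    ||s|| <= delta, delta = mu * ||g|| (a Cauchy-point style step)."""
    g_norm = math.sqrt(sum(g * g for g in grad))
    delta = mu * g_norm                       # radius linear in gradient norm
    step = [-g for g in grad]                 # unconstrained model step for B = I
    s_norm = math.sqrt(sum(s * s for s in step))
    if s_norm > delta:                        # clip to the trust region
        step = [s * delta / s_norm for s in step]
    return [xi + si for xi, si in zip(x, step)]

# Toy objective f(x) = 0.5 * ||x||^2, so the exact gradient is x itself.
x = [4.0, -3.0]
for _ in range(50):
    x = strme_style_step(x, grad=x)
final_norm = math.sqrt(sum(c * c for c in x))
```

Here each iterate is halved (the step is always clipped to half the gradient norm), so the iterates converge linearly to the minimizer, mirroring the contraction behaviour such radius rules aim for.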
A new probabilistic testability measure is presented to ease test length analyses of random testing and pseudorandom testing. The testability measure given in this paper is oriented to signal conflicts at reconvergent fanouts. The test length analyses in this paper are based on a hard fault set, the calculation of which is practicable and simple. Experimental results show the accuracy of this test length analyser compared with those of Savir, Chin and McCluskey, and Wunderlich, using a pseudorandom test generator combined with exhaustive fault simulation.
It is attractive to formulate problems in computer vision and related fields in terms of probabilistic estimation where the probability models are defined over graphs, such as grammars. The graphical structures, and the state variables defined over them, give a rich knowledge representation which can describe the complex structures of objects and images. The probability distributions defined over the graphs capture the statistical variability of these structures. These probability models can be learnt from training data with limited amounts of supervision. But learning these models suffers from the difficulty of evaluating the normalization constant, or partition function, of the probability distributions, which can be extremely computationally demanding. This paper shows that by placing bounds on the normalization constant we can obtain computationally tractable approximations. Surprisingly, for certain choices of loss functions, we obtain many of the standard max-margin criteria used in support vector machines (SVMs), and hence we reduce the learning to standard machine learning methods. We show that many machine learning methods can be obtained in this way as approximations to probabilistic methods, including multi-class max-margin, ordinal regression, max-margin Markov networks and parsers, multiple-instance learning, and latent SVM. We illustrate this work with computer vision applications including image labeling, object detection and localization, and motion estimation. We speculate that better results can be obtained by using better bounds and approximations.
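The reduction from partition-function bounds to max-margin criteria can be made concrete with the elementary log-sum-exp bounds, a standard identity sketched here rather than reproduced from the paper:

```latex
% For scores s(y) over a finite label set Y:
\max_{y \in Y} s(y)\;\le\;\log \sum_{y \in Y} e^{s(y)}\;\le\;\max_{y \in Y} s(y) + \log |Y|.
% Replacing the log partition function \log Z by its lower bound in the
% negative log-likelihood  -s(y^{*}) + \log Z  gives the tractable surrogate
\ell(y^{*}) \;=\; \max_{y \in Y} s(y) - s(y^{*}) \;\ge\; 0,
% which is a structured hinge-type (max-margin) criterion of the kind
% used in SVM-style learning.
```

The upper bound shows the surrogate is within log|Y| of the true negative log-likelihood, which is one way to see why such max-margin approximations can work well.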
This paper presents a new technique of unified probabilistic models for face recognition from only one single example image per person. The unified models, trained on an obtained training set with multiple samples per person, are used to recognize facial images from another disjoint database with a single sample per person. Variations between facial images are modeled as two unified probabilistic models: within-class variations and between-class variations. Gaussian Mixture Models are used to approximate the distributions of the two variations, and a classifier combination method is exploited to improve the performance. Extensive experimental results on the ORL face database and the authors' database (the ICT-JDL database), including in total 1,750 facial images of 350 individuals, demonstrate that the proposed technique, compared with the traditional eigenface method and some other well-known traditional algorithms, is a significantly more effective and robust approach for face recognition.
The recent outbreak of COVID-19 has caused millions of deaths worldwide and a huge societal and economic impact in virtually all countries. A large variety of mathematical models to describe the dynamics of COVID-19 transmission have been reported. Among them, Bayesian probabilistic models of COVID-19 transmission dynamics have been very efficient in the interpretation of early data from the beginning of the pandemic, helping to estimate the impact of non-pharmacological measures in each country and to forecast the evolution of the pandemic in different potential scenarios. These models use probability distribution curves to describe key dynamic aspects of the transmission, such as the probability for every infected person of infecting other individuals, dying or recovering, with parameters obtained from experimental epidemiological data. However, the impact of vaccine-induced immunity, which has been key for controlling the public health emergency caused by the pandemic, has been more challenging to describe in these models, due to the complexity of the experimental data. Here we report different probability distribution curves to model the acquisition and decay of immunity after vaccination. We discuss the mathematical background and how these models can be integrated into existing Bayesian probabilistic models to provide a good estimation of the dynamics of COVID-19 transmission during the entire pandemic period.
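One simple way to model "acquisition and decay of immunity" as a curve over time is to combine a logistic onset with an exponential (half-life) decay. This is only a hedged sketch of the general shape such curves take; the functional form and every parameter below (peak protection, onset midpoint, decay half-life) are illustrative assumptions, not the distributions reported in the paper.

```python
import math

# Illustrative vaccine-immunity curve: protection ramps up after the
# dose (logistic onset) and then wanes (exponential decay).
# All parameters are assumptions for illustration, not fitted values.

def protection(t_days, peak=0.9, onset_mid=14.0, onset_rate=0.4, half_life=180.0):
    onset = 1.0 / (1.0 + math.exp(-onset_rate * (t_days - onset_mid)))
    decay = 0.5 ** (t_days / half_life)
    return peak * onset * decay

# Sample the curve every 30 days over two years.
curve = [protection(t) for t in range(0, 721, 30)]
peak_value = max(curve)
```

Curves of this kind can be plugged into a Bayesian transmission model as the time-dependent probability that a vaccinated individual is protected.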
Online automatic fault diagnosis in industrial systems is essential for guaranteeing safe, reliable and efficient operations. However, difficulties associated with computational overload, ubiquitous uncertainties and insufficient fault samples hamper the engineering application of intelligent fault diagnosis technology. To address these problems, this paper introduces the dynamic uncertain causality graph method, a new attempt to model complex behaviors of real-world systems under uncertainties. The visual representation of causality pathways and the self-reliant "chaining" inference mechanisms are analyzed. In particular, some solutions are investigated for the diagnostic reasoning algorithm, aiming to reduce its computational complexity and to improve its robustness to potential losses and imprecision in observations. To evaluate the effectiveness and performance of this method, experiments are conducted using both synthetic calculation cases and generator faults of a nuclear power plant. The results demonstrate high diagnostic accuracy and efficiency, suggesting the method's practical significance in large-scale industrial applications.
A novel approach named aligned mixture probabilistic principal component analysis (AMPPCA) is proposed in this study for fault detection in multimode chemical processes. In order to exploit within-mode correlations, the AMPPCA algorithm first estimates a statistical description for each operating mode by applying mixture probabilistic principal component analysis (MPPCA). As a comparison, the combined MPPCA is employed, where monitoring results are softly integrated according to the posterior probabilities of the test sample in each local model. To exploit the cross-mode correlations, which may be useful but are inadvertently neglected by separately maintained monitoring approaches, a global monitoring model is constructed by aligning all local models together. In this way, both within-mode and cross-mode correlations are preserved in the integrated space. Finally, the utility and feasibility of AMPPCA are demonstrated through a non-isothermal continuous stirred tank reactor and the TE benchmark process.
Background: With mounting global environmental, social and economic pressures, the resilience and stability of forests, and thus the provisioning of vital ecosystem services, is increasingly threatened. Intensified monitoring can help to detect ecological threats and changes earlier, but monitoring resources are limited. Participatory forest monitoring with the help of "citizen scientists" can provide additional resources for forest monitoring and at the same time help to communicate with stakeholders and the general public. Examples of citizen science projects in the forestry domain can be found, but a solid, applicable larger framework to utilise public participation in the area of forest monitoring seems to be lacking. We propose that a better understanding of shared and related topics in citizen science and forest monitoring might be a first step towards such a framework. Methods: We conduct a systematic meta-analysis of 1015 publication abstracts addressing "forest monitoring" and "citizen science" in order to explore the combined topical landscape of these subjects. We employ topic modelling, an unsupervised probabilistic machine learning method, to identify latent shared topics in the analysed publications. Results: We find that large shared topics exist, but that these are primarily topics that would be expected in scientific publications in general. Common domain-specific topics are under-represented and indicate a topical separation of the two document sets on "forest monitoring" and "citizen science", and thus of the represented domains. While topic modelling as a method proves to be a scalable and useful analytical tool, we propose that our approach could deliver even more useful data if a larger document set and full-text publications were available for analysis. Conclusions: We propose that these results, together with the observation of non-shared but related topics, point at under-utilised opportunities for public participation in forest monitoring. Citizen science could be applied as a versatile tool in forest ecosystem monitoring, complementing traditional forest monitoring programmes, assisting early threat recognition and helping to connect forest management with the general public. We conclude that our presented approach should be pursued further, as it may aid the understanding and setup of citizen science efforts in the forest monitoring domain.
A simple probabilistic model for predicting crack growth behavior under random loading is presented. In the model, the parameters C and m in the Paris-Erdogan equation are taken as random variables, and their stochastic characteristic values are obtained through fatigue crack propagation tests on an offshore structural steel under constant-amplitude loading. Furthermore, by using the Monte Carlo simulation technique, the fatigue crack propagation life to reach a given crack length is predicted. Tests are conducted to verify the applicability of the theoretical prediction of fatigue crack propagation.
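The Monte Carlo idea described above can be sketched compactly: sample (C, m) from assumed distributions and evaluate the closed-form integral of the Paris-Erdogan law for constant-amplitude loading. The distributions and all numerical values below are hypothetical illustrations, not the offshore-steel data from the paper.

```python
import math
import random

# Probabilistic Paris-Erdogan crack growth, Monte Carlo style:
#   da/dN = C * (dK)**m,  dK = Y * dS * sqrt(pi * a)
# For constant-amplitude loading and m != 2, the life from a0 to af is
#   N = (af**(1 - m/2) - a0**(1 - m/2)) / (C * (Y*dS*sqrt(pi))**m * (1 - m/2))

def life_cycles(C, m, a0=0.001, af=0.02, dS=100.0, Y=1.0):
    k = Y * dS * math.sqrt(math.pi)
    e = 1.0 - m / 2.0
    return (af ** e - a0 ** e) / (C * k ** m * e)

random.seed(0)
lives = []
for _ in range(2000):
    m = random.gauss(3.0, 0.1)        # assumed Paris exponent distribution
    log_c = random.gauss(-28.0, 0.3)  # assumed ln C (SI-consistent units)
    lives.append(life_cycles(math.exp(log_c), m))
lives.sort()
median_life = lives[len(lives) // 2]
```

Sorting the simulated lives gives the empirical distribution of propagation life, from which percentiles (e.g. a conservative design life) can be read off directly.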
An algorithm is developed to refine gait silhouettes and clean the noise generated by imperfect motion detection techniques, yielding relatively complete, high-quality silhouettes. The silhouettes are sequentially refined at two levels according to two different probabilistic models. The first level is within-sequence refinement: each silhouette in a particular sequence is refined by an individual model trained on the gait images from the current sequence. The second level is between-sequence refinement: all silhouettes that need further refinement are modified by a population model trained on gait images chosen from a certain number of pedestrians. The intention is to preserve the within-class similarity and to decrease the interaction between one class and the others. Comparative experimental results indicate that the proposed algorithm is simple and quite effective, and that it helps existing recognition methods achieve higher recognition performance.
Scour has been widely accepted as a key reason for bridge failures. Bridges are susceptible and sensitive to the scour phenomenon, which describes the loss of riverbed sediments around the bridge supports due to flow. The carrying capacity of a deep-water foundation is influenced by the formation of a scour hole, which means that severe scour can lead to a bridge failure without warning. Most current scour predictions are based on deterministic models, while other loads at bridges are usually provided as probabilistic values. To integrate scour factors with other loads in bridge design and research, a quantile regression model was utilized to estimate scour depth. Field data and experimental data from previous studies were collected to build the model. Moreover, scour estimations using the HEC-18 equation and the proposed method were compared. By using the "CCC (Calculate, Confirm, and Check)" procedure, the probabilistic concept can be used to calculate various scour depths with the targeted likelihood according to a specified chance of bridge failure. The study shows that, with a sufficiently large and continuously updated database, the proposed model can present reasonable results and provide guidance for scour mitigation.
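The "targeted likelihood" idea rests on a basic property of quantile regression: the tau-quantile is the minimizer of the pinball (check) loss. The sketch below demonstrates this property on toy numbers, fitting only a constant for clarity; the scour depths are made up for illustration and are not the paper's field or experimental data.

```python
# Quantile regression in one line of theory: the tau-quantile minimizes
# the pinball (check) loss.  Here we fit a constant to toy scour depths
# by brute force over the data points (one of which is a minimizer).

def pinball(residual, tau):
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def fitted_quantile(data, tau):
    return min(data, key=lambda q: sum(pinball(y - q, tau) for y in data))

depths = [0.4, 0.7, 0.9, 1.1, 1.3, 1.6, 1.8, 2.2, 2.7, 3.5]  # toy scour depths (m)
q85 = fitted_quantile(depths, 0.85)
```

Repeating this with covariates (flow velocity, pier width, sediment size) instead of a constant gives depth estimates at any chosen exceedance probability, which is what allows scour to be paired with other probabilistic loads.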
The growth and survival characteristics of Salmonella Enteritidis under acidic and osmotic conditions were studied. Meanwhile, a probabilistic model based on the theory of cell division and mortality was established to predict the growth or inactivation of S. Enteritidis. The experimental results demonstrated that the growth curves of planktonic and detached cells showed a significant difference (p < 0.05) under four conditions: pH 5.0 + 0.0% NaCl, pH 7.0 + 4.0% NaCl, pH 6.0 + 4.0% NaCl, and pH 5.0 + 4.0% NaCl. The established primary and secondary models could describe the growth of S. Enteritidis well, as assessed by four mathematical evaluation indices: the coefficient of determination (R²), root mean square error (RMSE), accuracy factor (Af) and bias factor (Bf). Moreover, sequential treatment with 15% NaCl stress followed by pH 4.5 stress was the best condition for inactivating S. Enteritidis within 10 h at 25 °C. The probabilistic model in logistic or Weibullian form could also predict the inactivation of S. Enteritidis well, thus realizing the unification of the predictive model to some extent, or a generalization of the inactivation model. Furthermore, the primary 4-parameter probabilistic model or the generalized inactivation model had slightly higher applicability and reliability for describing the growth or inactivation of S. Enteritidis than the Baranyi model or the exponential inactivation model within the experimental range of this study.
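The Weibullian inactivation form mentioned above has a standard survival expression, log10(N_t/N_0) = -(t/delta)^p, where delta is the time of the first decimal reduction and p the shape parameter. The sketch below uses illustrative parameter values, not values fitted to the S. Enteritidis data.

```python
# Weibullian microbial survival sketch:
#   log10(N_t / N0) = -(t / delta) ** p
# delta: time of first decimal (10-fold) reduction; p: shape parameter
# (p < 1 produces the tailing often seen in stressed populations).
# Parameter values are illustrative assumptions.

def log10_survival(t_hours, delta=2.5, p=0.8):
    return -((t_hours / delta) ** p)

curve = [log10_survival(t) for t in range(0, 11)]  # hourly, 0..10 h
reductions_at_10h = -curve[-1]                     # decimal reductions after 10 h
```

With p = 1 this collapses to the classical log-linear (exponential) inactivation model, which is how the Weibullian form generalizes it.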
In video multi-target tracking, the common particle filter cannot deal well with uncertain relations among multiple targets. To solve this problem, many researchers use data association methods to reduce the multi-target uncertainty. However, the traditional data association method has difficulty tracking accurately when a target is occluded. To handle occlusion in video, this paper combines the theory of data association with a probabilistic graphical model for multi-target modeling and for analysis of the relationships among targets within the particle filter framework. Experimental results show that the proposed algorithm solves the occlusion problem better than the traditional algorithm.
The current univariate approach to predicting the probability of well construction time has limited accuracy because it ignores key factors affecting the time. In this study, we propose a multivariate probabilistic approach to predict the risks of well construction time. It takes advantage of an extended multi-dimensional Bernacchia–Pigolotti kernel density estimation technique and combines probability distributions by means of Monte-Carlo simulations to establish a depth-dependent probabilistic model. This method is applied to predict the durations of the drilling phases of 192 wells, most of which are located in the Australia-Asia region. Despite the challenge of incomplete records, our model shows excellent statistical agreement with the observed data. Our results suggest that the total time is longer than the trouble-free time by at least 4 days and at most 12 days within the 10%–90% confidence interval. This model allows us to derive the likelihood of the duration of each phase at a certain depth and to generate inputs for training data-driven models, facilitating evaluation and prediction of the risks of an entire drilling operation.
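The "combine distributions by Monte-Carlo" step can be sketched with a basic Gaussian-KDE resampling trick: draw a recorded duration at random, jitter it by the kernel bandwidth, and sum one draw per phase to build the distribution of total time. This is only a schematic of the idea (the paper uses an extended Bernacchia–Pigolotti estimator and depth dependence); the phase data and bandwidth below are hypothetical, not the 192-well records.

```python
import random

# Monte-Carlo combination of per-phase duration distributions.
# Each phase is resampled KDE-style: pick a recorded duration and add
# Gaussian noise at the kernel bandwidth; phase draws are summed to get
# a distribution of total well time.  Data are hypothetical (days).

phases = {
    "surface":      [1.5, 2.0, 2.5, 3.0, 2.2],
    "intermediate": [4.0, 5.5, 6.0, 7.5, 5.0],
    "reservoir":    [3.0, 3.5, 5.0, 6.5, 4.0],
}

def kde_sample(data, bandwidth=0.3):
    return max(0.0, random.choice(data) + random.gauss(0.0, bandwidth))

random.seed(1)
totals = sorted(sum(kde_sample(d) for d in phases.values()) for _ in range(5000))
p10, p90 = totals[500], totals[4500]  # empirical 10% and 90% quantiles
```

Reading off the 10%–90% interval of the simulated totals mirrors how the paper reports the spread between trouble-free and total time.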
This article shows the probabilistic modeling of hydrocarbon spills on the sea surface, using climatological data on oil spill trajectories obtained by applying the Lagrangian model PETROMAR-3D. To achieve this goal, several computing and statistical tools were used to develop the probabilistic modeling solution, based on the methodology of Guo. The solution was implemented using a database approach and the SQL language. A case study is presented, based on a hypothetical spill at a location inside the Exclusive Economic Zone of Cuba. Important outputs and products of the probabilistic modeling were obtained, which are very useful for decision-makers and operators in charge of responding to oil spill accidents and preparing contingency plans to minimize their effects. In order to study the relationship between the initial trajectory and the arrival of spilled hydrocarbons at the coast, a new approach is introduced as a promising direction for modeling: it consists of storing in databases the direction of movement of the oil slick during the first 24 hours. The probabilistic modeling solution presented is of great importance for hazard studies of oil spills in Cuban coastal areas.
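The core probabilistic product of such a study is an affectation map: for each grid cell, the fraction of simulated trajectories that pass through it. The sketch below shows that counting step only; the trajectories are toy data, not PETROMAR-3D output, and the grid layout is an assumption for illustration.

```python
# Probability-of-affectation grid from an ensemble of simulated spill
# trajectories: the probability assigned to a cell is the fraction of
# trajectories that visit it at least once.

def affectation_grid(trajectories, nx, ny):
    counts = [[0] * nx for _ in range(ny)]
    for traj in trajectories:
        visited = {(i, j) for (i, j) in traj}  # count each cell once per trajectory
        for i, j in visited:
            counts[j][i] += 1
    n = len(trajectories)
    return [[c / n for c in row] for row in counts]

# Four toy trajectories over a 3x3 grid, all released at cell (0, 0).
trajs = [
    [(0, 0), (1, 0), (2, 1)],
    [(0, 0), (1, 1), (2, 1)],
    [(0, 0), (0, 1), (1, 2)],
    [(0, 0), (1, 0), (2, 0)],
]
grid = affectation_grid(trajs, nx=3, ny=3)
```

Storing per-trajectory cell visits in a database table makes this aggregation a single SQL GROUP BY, which matches the database-centred implementation the article describes.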
Artificial intelligence and computer vision need methods for 2D (two-dimensional) shape retrieval from a discrete set of boundary points. A novel method based on MHR (Hurwitz-Radon matrices) is used in shape modeling. The proposed method is based on the family of MHR matrices, whose columns are composed of orthogonal vectors. The 2D curve is retrieved via different functions used as probability distribution functions: sine, cosine, tangent, logarithm, exponent, arcsin, arccos, arctan and power functions. Created from the family of N-1 MHR matrices and completed with the identity matrix, the system of matrices is orthogonal only for dimensions N = 2, 4 or 8. The orthogonality of columns and rows is very significant for the stability and high precision of the calculations. The MHR method interpolates the function point by point without using any formula for the function. The main features of the MHR method are: the accuracy of curve reconstruction depends on the number of nodes and the method of choosing nodes; interpolation of L points of the curve carries a computational cost of rank O(L); and MHR interpolation is not a linear interpolation.
New sequencing technologies such as Illumina/Solexa, SOLiD/ABI, and 454/Roche have revolutionized biological research. In this context, the SOLiD platform has a particular sequencing mode, known as a multiplex run, which enables the sequencing of several samples in a single run. This implies cost reduction and simplifies the analysis of related samples. However, this sequencing mode requires an additional filtering step to ensure the reliability of the results. Thus, we propose in this paper a probabilistic model that considers the intrinsic characteristics of each sequencing run to characterize multiplex runs and filter low-quality data, increasing the reliability of data analysis for multiplex sequencing performed on SOLiD. The results show that the proposed model proves to be satisfactory due to: 1) the identification of faults in the sequencing process; 2) the adaptation and development of new protocols for sample preparation; 3) the assignment of a degree of confidence to the generated data; and 4) the guidance of a filtering process that does not discard useful sequences in an arbitrary manner.
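A probability-guided filtering step of the kind described can be illustrated with the standard Phred convention: a quality score Q maps to an error probability p = 10^(-Q/10), and a read is kept only if its expected number of erroneous bases stays below a threshold. This is a generic sketch in that spirit, not the authors' SOLiD-specific model; the reads and threshold are invented for illustration.

```python
# Phred-style quality filtering sketch: per-base quality scores map to
# error probabilities, p = 10 ** (-Q / 10); a read is kept if its
# expected number of erroneous bases is below a threshold.

def expected_errors(quals):
    return sum(10.0 ** (-q / 10.0) for q in quals)

def keep(quals, max_expected_errors=1.0):
    return expected_errors(quals) <= max_expected_errors

# Hypothetical 8-base reads: one high quality, one low quality.
reads = {
    "good": [30, 32, 35, 28, 31, 33, 30, 29],
    "poor": [5, 6, 4, 7, 5, 6, 5, 4],
}
kept = {name: keep(q) for name, q in reads.items()}
```

Filtering on expected errors rather than on a hard per-base cutoff is one way to avoid "discarding useful sequences in an arbitrary manner", since a single weak base no longer dooms an otherwise reliable read.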
Renewable energy production and the balance between production and demand have become increasingly crucial in modern power systems, necessitating accurate forecasting. Traditional deterministic methods fail to capture the inherent uncertainties associated with intermittent renewable sources and fluctuating demand patterns. This paper proposes a novel denoising diffusion method for multivariate time series probabilistic forecasting that explicitly models the interdependencies between variables through graph modeling. Our framework employs a parallel feature extraction module that simultaneously captures temporal dynamics and spatial correlations, enabling improved forecasting accuracy. Through extensive evaluation on two real-world datasets focused on renewable energy and electricity demand, we demonstrate that our approach achieves state-of-the-art performance in probabilistic energy time series forecasting tasks. By explicitly modeling variable interdependencies and incorporating temporal information, our method provides reliable probabilistic forecasts, crucial for effective decision-making and resource allocation in the energy sector. Extensive experiments validate that our proposed method reduces the Continuous Ranked Probability Score (CRPS) by 2.1%-70.9%, Mean Absolute Error (MAE) by 4.4%-52.2%, and Root Mean Squared Error (RMSE) by 7.9%-53.4% over existing methods on the two real-world datasets.
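The CRPS metric reported above has a simple empirical estimator for ensemble (sample-based) forecasts: CRPS = E|X - y| - 0.5 E|X - X'|, where X, X' are independent draws from the forecast and y is the observation. The sketch below uses toy numbers; it is the standard ensemble estimator, not the paper's evaluation code.

```python
# Empirical CRPS for an ensemble forecast x_1..x_m against observation y:
#   CRPS = mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|
# For a one-member (deterministic) forecast it reduces to |x - y|.

def crps_ensemble(members, y):
    m = len(members)
    term1 = sum(abs(x - y) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (m * m)
    return term1 - 0.5 * term2

obs = 10.0
sharp = crps_ensemble([9.5, 10.0, 10.5], obs)  # sharp, well-centred ensemble
point = crps_ensemble([12.0], obs)             # biased point forecast
```

Because the second term rewards justified sharpness, a well-calibrated probabilistic forecast scores better than a biased deterministic one, which is why CRPS is the headline metric for this kind of model.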
Funding: National Natural Science Foundation of China, Special Foundation of National Excellent Ph.D. Thesis, Outstanding Young Teachers Program of the Ministry of Education of China
Funding: Support from the Key Program of the National Natural Science Foundation of China (No. 12232004) and the Training Program of the Sichuan Province Science and Technology Innovation Seedling Project (No. MZGC20230012) is acknowledged.
Funding: This research is partially supported by the National Natural Science Foundation of China (Grants 11331012 and 11688101).
文摘We present a stochastic trust-region model-based framework in which its radius is related to the probabilistic models.Especially,we propose a specific algorithm termed STRME,in which the trust-region radius depends linearly on the gradient used to define the latest model.The complexity results of the STRME method in nonconvex,convex and strongly convex settings are presented,which match those of the existing algorithms based on probabilistic properties.In addition,several numerical experiments are carried out to reveal the benefits of the proposed methods compared to the existing stochastic trust-region methods and other relevant stochastic gradient methods.
文摘A new probabilistic testability measure is presented to ease test length analyses of random testing and pseudorandom testing.The testability measure given in this paper is oriented to signal conflict of reconvergent fanouts.Test length analyses in this paper are based on a hard fault set,calculations of which are practicable and simple.Experimental results have been obtained to show the accuracy of this test length analyser in comparison with that of Savir,Chin and McCluskey,and Wunderlich by using a pseudorandom test generator combined with exhaustive fault simulation.
文摘It is attractive to formulate problems in computer vision and related fields in term of probabilis- tic estimation where the probability models are defined over graphs, such as grammars. The graphical struc- tures, and the state variables defined over them, give a rich knowledge representation which can describe the complex structures of objects and images. The proba- bility distributions defined over the graphs capture the statistical variability of these structures. These proba- bility models can be learnt from training data with lim- ited amounts of supervision. But learning these models suffers from the difficulty of evaluating the normaliza- tion constant, or partition function, of the probability distributions which can be extremely computationally demanding. This paper shows that by placing bounds on the normalization constant we can obtain compu- rationally tractable approximations. Surprisingly, for certain choices of loss functions, we obtain many of the standard max-margin criteria used in support vector machines (SVMs) and hence we reduce the learning to standard machine learning methods. We show that many machine learning methods can be obtained in this way as approximations to probabilistic methods including multi-class max-margin, ordinal regression, max-margin Markov networks and parsers, multiple- instance learning, and latent SVM. We illustrate this work by computer vision applications including image labeling, object detection and localization, and motion estimation. We speculate that rained by using better bounds better results can be ob- and approximations.
Abstract: This paper presents a new technique of unified probabilistic models for face recognition from only one single example image per person. The unified models, trained on an obtained training set with multiple samples per person, are used to recognize facial images from another disjoint database with a single sample per person. Variations between facial images are modeled as two unified probabilistic models: within-class variations and between-class variations. Gaussian Mixture Models are used to approximate the distributions of the two variations, and a classifier combination method is exploited to improve the performance. Extensive experimental results on the ORL face database and the authors' database (the ICT-JDL database), including a total of 1,750 facial images of 350 individuals, demonstrate that the proposed technique, compared with the traditional eigenface method and some well-known traditional algorithms, is a significantly more effective and robust approach for face recognition.
Abstract: The recent outbreak of COVID-19 has caused millions of deaths worldwide and a huge societal and economic impact in virtually all countries. A large variety of mathematical models to describe the dynamics of COVID-19 transmission have been reported. Among them, Bayesian probabilistic models of COVID-19 transmission dynamics have been very efficient in the interpretation of early data from the beginning of the pandemic, helping to estimate the impact of non-pharmacological measures in each country, and forecasting the evolution of the pandemic in different potential scenarios. These models use probability distribution curves to describe key dynamic aspects of the transmission, like the probability for every infected person of infecting other individuals, dying or recovering, with parameters obtained from experimental epidemiological data. However, the impact of vaccine-induced immunity, which has been key for controlling the public health emergency caused by the pandemic, has been more challenging to describe in these models, due to the complexity of experimental data. Here we report different probability distribution curves to model the acquisition and decay of immunity after vaccination. We discuss the mathematical background and how these models can be integrated into existing Bayesian probabilistic models to provide a good estimation of the dynamics of COVID-19 transmission during the entire pandemic period.
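As one hedged illustration of such probability distribution curves, a gamma density is a common choice for a delayed-rise, slow-decay curve such as post-vaccination immunity; the shape and scale parameters below are hypothetical, not fitted values from the paper:

```python
import math

def gamma_pdf(t, shape, scale):
    """Gamma density f(t) = t^(k-1) e^(-t/theta) / (Gamma(k) theta^k)."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1) * math.exp(-t / scale)
            / (math.gamma(shape) * scale ** shape))

# Hypothetical immunity curve peaking a few weeks after the dose:
curve = [gamma_pdf(day, shape=3.0, scale=14.0) for day in range(365)]
peak_day = max(range(365), key=lambda d: curve[d])
```

The mode of this density sits at (shape − 1) × scale days, so the two parameters directly control when immunity peaks and how slowly it decays.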
Funding: Supported by the National Natural Science Foundation of China (Nos. 61050005 and 61273330), the Research Foundation for the Doctoral Program of the China Ministry of Education (No. 20120002110037), the 2014 Teaching Reform Project of Shandong Normal University, and the Development Project of China Guangdong Nuclear Power Group (No. CNPRI-ST10P005).
Abstract: Online automatic fault diagnosis in industrial systems is essential for guaranteeing safe, reliable and efficient operations. However, difficulties associated with computational overload, ubiquitous uncertainties and insufficient fault samples hamper the engineering application of intelligent fault diagnosis technology. To address these problems, this paper introduces the method of the dynamic uncertain causality graph, a new attempt to model complex behaviors of real-world systems under uncertainties. The visual representation of causality pathways and the self-reliant "chaining" inference mechanism are analyzed. In particular, solutions are investigated for the diagnostic reasoning algorithm that aim at reducing its computational complexity and improving its robustness to potential losses and imprecisions in observations. To evaluate the effectiveness and performance of this method, experiments are conducted using both synthetic calculation cases and generator faults of a nuclear power plant. The results demonstrate high diagnostic accuracy and efficiency, suggesting the method's practical significance in large-scale industrial applications.
Funding: Supported by the National Natural Science Foundation of China (61374140) and the Shanghai Pujiang Program (12PJ1402200).
Abstract: A novel approach named aligned mixture probabilistic principal component analysis (AMPPCA) is proposed in this study for fault detection in multimode chemical processes. In order to exploit within-mode correlations, the AMPPCA algorithm first estimates a statistical description for each operating mode by applying mixture probabilistic principal component analysis (MPPCA). As a comparison, the combined MPPCA is employed, where monitoring results are softly integrated according to the posterior probabilities of the test sample in each local model. To exploit the cross-mode correlations, which may be useful but are inadvertently neglected by separately maintained monitoring models, a global monitoring model is constructed by aligning all local models together. In this way, both within-mode and cross-mode correlations are preserved in this integrated space. Finally, the utility and feasibility of AMPPCA are demonstrated through a non-isothermal continuous stirred tank reactor and the TE benchmark process.
Abstract: Background: With mounting global environmental, social and economic pressures, the resilience and stability of forests, and thus the provisioning of vital ecosystem services, is increasingly threatened. Intensified monitoring can help to detect ecological threats and changes earlier, but monitoring resources are limited. Participatory forest monitoring with the help of "citizen scientists" can provide additional resources for forest monitoring and at the same time help to communicate with stakeholders and the general public. Examples of citizen science projects in the forestry domain can be found, but a solid, applicable larger framework to utilise public participation in the area of forest monitoring seems to be lacking. We propose that a better understanding of shared and related topics in citizen science and forest monitoring might be a first step towards such a framework. Methods: We conduct a systematic meta-analysis of 1015 publication abstracts addressing "forest monitoring" and "citizen science" in order to explore the combined topical landscape of these subjects. We employ 'topic modelling', an unsupervised probabilistic machine learning method, to identify latent shared topics in the analysed publications. Results: We find that large shared topics exist, but that these are primarily topics that would be expected in scientific publications in general. Common domain-specific topics are under-represented and indicate a topical separation of the two document sets on "forest monitoring" and "citizen science", and thus of the represented domains. While topic modelling as a method proves to be a scalable and useful analytical tool, we propose that our approach could deliver even more useful data if a larger document set and full-text publications were available for analysis. Conclusions: We propose that these results, together with the observation of non-shared but related topics, point at under-utilised opportunities for public participation in forest monitoring.
Citizen science could be applied as a versatile tool in forest ecosystems monitoring, complementing traditional forest monitoring programmes, assisting early threat recognition and helping to connect forest management with the general public. We conclude that our presented approach should be pursued further as it may aid the understanding and setup of citizen science efforts in the forest monitoring domain.
Abstract: A simple probabilistic model for predicting crack growth behavior under random loading is presented. In the model, the parameters c and m in the Paris-Erdogan equation are taken as random variables, and their stochastic characteristic values are obtained through fatigue crack propagation tests on an offshore structural steel under constant-amplitude loading. Furthermore, by using the Monte Carlo simulation technique, the fatigue crack propagation life to reach a given crack length is predicted. Tests are conducted to verify the applicability of the theoretical prediction of fatigue crack propagation.
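A minimal Monte Carlo sketch of this approach, assuming the Paris-Erdogan law da/dN = C(ΔK)^m with ΔK = F·Δσ·√(πa); the lognormal/normal spreads for C and m below are hypothetical stand-ins for the statistics the paper extracts from its tests:

```python
import math
import random

random.seed(42)

def crack_life(C, m, a0=0.001, af=0.02, dsigma=100.0, geometry=1.0, steps=2000):
    """Numerically integrate da/dN = C * (dK)^m from crack length a0 to af,
    with dK = geometry * dsigma * sqrt(pi * a)  (units: MPa, m)."""
    a, life = a0, 0.0
    da = (af - a0) / steps
    for _ in range(steps):
        dK = geometry * dsigma * math.sqrt(math.pi * a)
        life += da / (C * dK ** m)   # cycles spent growing this increment
        a += da
    return life

# C and m treated as random variables (hypothetical distributions):
lives = [crack_life(C=random.lognormvariate(math.log(1e-11), 0.3),
                    m=random.gauss(3.0, 0.1))
         for _ in range(200)]
```

Sorting `lives` yields the empirical distribution of propagation life to the target crack length, which is exactly the quantity the Monte Carlo step in the abstract predicts.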
Funding: The National Natural Science Foundation of China (No. 60675024).
Abstract: An algorithm to refine and clean gait silhouette noise generated by imperfect motion detection techniques is developed, and a relatively complete and high-quality silhouette is obtained. The silhouettes are sequentially refined at two levels according to two different probabilistic models. The first level is within-sequence refinement: each silhouette in a particular sequence is refined by an individual model trained on the gait images from the current sequence. The second level is between-sequence refinement: all the silhouettes that need further refinement are modified by a population model trained on gait images chosen from a certain number of pedestrians. The intention is to preserve the within-class similarity and to decrease the interaction between one class and others. Comparative experimental results indicate that the proposed algorithm is simple and quite effective, and it helps existing recognition methods achieve higher recognition performance.
Funding: Sponsored by the National Natural Science Foundation of China (Grant Nos. 51908421 and 41172246).
Abstract: Scour has been widely accepted as a key reason for bridge failures. Bridges are susceptible and sensitive to the scour phenomenon, which describes the loss of riverbed sediments around the bridge supports due to flow. The carrying capacity of a deep-water foundation is influenced by the formation of a scour hole, which means that severe scour can lead to a bridge failure without warning. Most current scour predictions are based on deterministic models, while other loads at bridges are usually provided as probabilistic values. To integrate scour factors with other loads in bridge design and research, a quantile regression model was utilized to estimate scour depth. Field data and experimental data from previous studies were collected to build the model. Moreover, scour estimations using the HEC-18 equation and the proposed method were compared. By using the "CCC (Calculate, Confirm, and Check)" procedure, the probabilistic concept can be used to calculate various scour depths with the targeted likelihood according to a specified chance of bridge failure. The study shows that with a sufficiently large and continuously updated database, the proposed model can present reasonable results and provide guidance for scour mitigation.
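The quantile idea can be sketched without the full regression machinery: at a single design point, the fitted q-th quantile plays the role of an empirical quantile of comparable observations, chosen to match a specified chance of exceedance. The depths below are hypothetical:

```python
import math

def empirical_quantile(values, q):
    """q-th empirical quantile by sorting; a minimal stand-in for the
    quantile-regression estimate evaluated at one design point."""
    s = sorted(values)
    idx = max(0, math.ceil(q * len(s)) - 1)
    return s[idx]

# Hypothetical measured scour depths (m) at comparable piers:
depths = [0.8, 1.1, 1.3, 1.4, 1.6, 1.7, 1.9, 2.2, 2.6, 3.1]
design_depth = empirical_quantile(depths, 0.90)  # exceeded ~10% of the time
```

Choosing q from an acceptable probability of bridge failure, rather than reporting a single deterministic depth, is what lets scour be combined with other probabilistic loads.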
Funding: This work was financially supported by the National Natural Science Foundation of China (NSFC 31271896 and 31371776), the National Science & Technology Pillar Program during the Twelfth Five-Year Plan Period (2015BAK36B04), and the Science and Technology Commission of Shanghai Municipality (15395810900).
Abstract: The growth and survival characteristics of Salmonella Enteritidis under acidic and osmotic conditions were studied. Meanwhile, a probabilistic model based on the theory of cell division and mortality was established to predict the growth or inactivation of S. Enteritidis. The experimental results demonstrated that the growth curves of planktonic and detached cells showed a significant difference (p < 0.05) under four conditions: pH 5.0 + 0.0% NaCl, pH 7.0 + 4.0% NaCl, pH 6.0 + 4.0% NaCl, and pH 5.0 + 4.0% NaCl. The established primary and secondary models described the growth of S. Enteritidis well, as judged by four mathematical evaluation indices: the determination coefficient (R2), root mean square error (RMSE), accuracy factor (Af) and bias factor (Bf). Moreover, sequential treatment with 15% NaCl stress followed by pH 4.5 stress was the best condition for inactivating S. Enteritidis within 10 h at 25 °C. The probabilistic model, in logistic or Weibullian form, could also predict the inactivation of S. Enteritidis well, thus unifying the predictive model to some extent, or generalizing the inactivation model. Furthermore, the primary 4-parameter probabilistic model or the generalized inactivation model had slightly higher applicability and reliability for describing the growth or inactivation of S. Enteritidis than the Baranyi model or the exponential inactivation model within the experimental range of this study.
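The Weibullian inactivation form referenced above is commonly written as log10 N(t) = log10 N0 − (t/δ)^p and can be sketched directly; the parameter values below are hypothetical, not the fitted ones:

```python
def weibull_log_survivors(t, log_n0, delta, p):
    """Weibullian inactivation: log10 N(t) = log10 N0 - (t / delta)^p,
    where delta is a scale (time) parameter and p a shape parameter."""
    return log_n0 - (t / delta) ** p

# Hypothetical parameters: 6-log initial load, scale 2 h, shape 1.5:
curve = [weibull_log_survivors(t, log_n0=6.0, delta=2.0, p=1.5)
         for t in range(11)]  # hourly points over 10 h
```

With p = 1 this reduces to the classical log-linear (exponential) inactivation model, which is the sense in which the Weibullian form generalizes it.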
Funding: Supported by the National High Technology Research and Development Program of China (No. 2007AA11Z227), the Natural Science Foundation of Jiangsu Province of China (No. BK2009352), and the Fundamental Research Funds for the Central Universities of China (No. 2010B16414).
Abstract: In video multi-target tracking, the common particle filter cannot deal well with uncertain relations among multiple targets. To solve this problem, many researchers use data association methods to reduce multi-target uncertainty. However, the traditional data association method has difficulty tracking accurately when a target is occluded. To handle occlusion in video, this paper combines the theory of data association with a probabilistic graphical model for multi-target modeling and analysis of the relationships among targets within the particle filter framework. Experimental results show that the proposed algorithm solves the occlusion problem better than the traditional algorithm.
Abstract: The current univariate approach to predicting the probability of well construction time has limited accuracy because it ignores key factors affecting that time. In this study, we propose a multivariate probabilistic approach to predicting the risks of well construction time. It takes advantage of an extended multi-dimensional Bernacchia–Pigolotti kernel density estimation technique and combines probability distributions by means of Monte Carlo simulations to establish a depth-dependent probabilistic model. This method is applied to predict the durations of the drilling phases of 192 wells, most of which are located in the Australia-Asia region. Despite the challenge of gappy records, our model shows excellent statistical agreement with the observed data. Our results suggest that the total time is longer than the trouble-free time by at least 4 days and at most 12 days within the 10%–90% confidence interval. This model allows us to derive the likelihood of the duration of each phase at a given depth and to generate inputs for training data-driven models, facilitating evaluation and prediction of the risks of an entire drilling operation.
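The distribution-combination step can be sketched by plain Monte Carlo resampling of per-phase durations (the paper combines kernel density estimates rather than raw samples; the duration data below are hypothetical):

```python
import random

random.seed(7)

def simulate_total_time(phase_samples, n_draws=10000):
    """Combine per-phase duration distributions by Monte Carlo:
    draw one duration per phase (resampling observed data) and sum."""
    totals = []
    for _ in range(n_draws):
        totals.append(sum(random.choice(samples) for samples in phase_samples))
    return sorted(totals)

# Hypothetical observed durations (days) for three drilling phases:
phases = [[2, 3, 3, 4], [5, 6, 8, 12], [1, 1, 2, 3]]
totals = simulate_total_time(phases)
p10 = totals[int(0.10 * len(totals))]   # optimistic total-time estimate
p90 = totals[int(0.90 * len(totals))]   # pessimistic total-time estimate
```

The P10–P90 spread of `totals` is the kind of confidence-interval statement the abstract reports for total versus trouble-free time.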
Abstract: This article presents the probabilistic modeling of hydrocarbon spills on the sea surface, using climatological data on oil spill trajectories produced by the lagrangian model PETROMAR-3D. To achieve this goal, several computing and statistical tools were used to develop the probabilistic modeling solution, based on the methodology of Guo. The solution was implemented using a database approach and the SQL language. A case study is presented based on a hypothetical spill at a location inside the Exclusive Economic Zone of Cuba. Important outputs and products of the probabilistic modeling were obtained, which are very useful for decision-makers and operators in charge of responding to oil spill accidents and preparing contingency plans to minimize their effects. In order to study the relationship between the initial trajectory and the arrival of spilled hydrocarbons at the coast, a new approach is introduced as a promising direction for modeling: storing in databases the direction of movement of the oil slick during the first 24 hours. The probabilistic modeling solution presented is of great importance for hazard studies of oil spills in Cuban coastal areas.
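A minimal sketch of how a trajectory climatology turns into spill probabilities: count, per grid cell, the fraction of simulated lagrangian trajectories that pass through it. The trajectories and grid size below are hypothetical, not PETROMAR-3D output:

```python
from collections import Counter

def hit_probability(trajectories, cell=0.5):
    """Estimate the probability that a spill reaches each grid cell
    as the fraction of trajectories passing through that cell."""
    counts = Counter()
    for traj in trajectories:
        # A trajectory counts at most once per cell, however long it lingers.
        visited = {(int(x / cell), int(y / cell)) for x, y in traj}
        counts.update(visited)
    n = len(trajectories)
    return {c: k / n for c, k in counts.items()}

# Three hypothetical trajectories (x, y positions) from one release point:
trajs = [
    [(0.1, 0.1), (0.6, 0.2), (1.2, 0.4)],
    [(0.1, 0.1), (0.2, 0.7), (0.3, 1.3)],
    [(0.1, 0.1), (0.7, 0.6), (1.3, 1.1)],
]
probs = hit_probability(trajs)
```

In a database-backed implementation like the one described, this aggregation maps naturally onto a GROUP BY over stored trajectory points.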
Abstract: Artificial intelligence and computer vision need methods for retrieving 2D (two-dimensional) shapes given a discrete set of boundary points. A novel method based on Hurwitz-Radon matrices (MHR) is used in shape modeling. The proposed method is based on the family of Hurwitz-Radon matrices, whose columns are composed of orthogonal vectors. The 2D curve is retrieved via different functions acting as probability distribution functions: sine, cosine, tangent, logarithm, exponent, arcsin, arccos, arctan and power functions. Built from the family of N-1 Hurwitz-Radon matrices and completed with the identity matrix, the system of matrices is orthogonal only for dimensions N = 2, 4 or 8. Orthogonality of columns and rows is very significant for the stability and high precision of the calculations. The MHR method interpolates a function point by point without using any formula for the function. The main features of the MHR method are: the accuracy of curve reconstruction depends on the number of nodes and the method of choosing nodes; interpolation of L points of the curve has a computational cost of order O(L); and MHR interpolation is not a linear interpolation.
Abstract: New sequencing technologies, such as Illumina/Solexa, SOLiD/ABI, and 454/Roche, have revolutionized biological research. In this context, the SOLiD platform has a particular sequencing mode, known as a multiplex run, which enables the sequencing of several samples in a single run. This reduces cost and simplifies the analysis of related samples. However, this sequencing mode requires an additional filtering step to ensure the reliability of the results. Thus, we propose in this paper a probabilistic model which considers the intrinsic characteristics of each sequencing run to characterize multiplex runs and filter low-quality data, increasing the reliability of data analysis for multiplex sequencing performed on SOLiD. The results show that the proposed model proves satisfactory due to: 1) identification of faults in the sequencing process; 2) adaptation and development of new protocols for sample preparation; 3) the assignment of a degree of confidence to the generated data; and 4) guiding of the filtering process without discarding useful sequences in an arbitrary manner.
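A generic quality-filtering sketch in the spirit of the proposed step (this is a plain Phred-style filter, not the paper's SOLiD-specific probabilistic model; the threshold and scores are hypothetical):

```python
def mean_error_probability(quals):
    """Convert Phred-like quality scores to error probabilities
    (p = 10^(-Q/10)) and average them over the read."""
    return sum(10 ** (-q / 10) for q in quals) / len(quals)

def keep_read(quals, max_mean_error=0.01):
    """Retain a read only if its mean base-error probability is low."""
    return mean_error_probability(quals) <= max_mean_error

good = keep_read([30, 32, 28, 35, 31])   # per-base errors around 1e-3
bad = keep_read([10, 8, 12, 9, 11])      # per-base errors around 1e-1
```

Averaging error probabilities (rather than raw quality scores) is the standard way to attach a single degree of confidence to a read, mirroring point 3) above.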
Abstract: Renewable energy production and the balance between production and demand have become increasingly crucial in modern power systems, necessitating accurate forecasting. Traditional deterministic methods fail to capture the inherent uncertainties associated with intermittent renewable sources and fluctuating demand patterns. This paper proposes a novel denoising diffusion method for multivariate time series probabilistic forecasting that explicitly models the interdependencies between variables through graph modeling. Our framework employs a parallel feature extraction module that simultaneously captures temporal dynamics and spatial correlations, enabling improved forecasting accuracy. Through extensive evaluation on two real-world datasets focused on renewable energy and electricity demand, we demonstrate that our approach achieves state-of-the-art performance in probabilistic energy time series forecasting tasks. By explicitly modeling variable interdependencies and incorporating temporal information, our method provides reliable probabilistic forecasts, crucial for effective decision-making and resource allocation in the energy sector. Extensive experiments validate that our proposed method reduces the Continuous Ranked Probability Score (CRPS) by 2.1%–70.9%, Mean Absolute Error (MAE) by 4.4%–52.2%, and Root Mean Squared Error (RMSE) by 7.9%–53.4% over existing methods on two real-world datasets.
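The CRPS reported above can be computed for a sample-based (ensemble) forecast with the standard empirical formula; a minimal sketch with made-up forecasts, unrelated to the paper's datasets:

```python
def crps_ensemble(members, obs):
    """Empirical CRPS for an ensemble forecast:
    mean_i |x_i - y|  -  0.5 * mean_{i,j} |x_i - x_j|."""
    n = len(members)
    term1 = sum(abs(x - obs) for x in members) / n
    term2 = sum(abs(a - b) for a in members for b in members) / (n * n)
    return term1 - 0.5 * term2

# A sharp, well-centred ensemble scores lower (better) than a broad one:
sharp = crps_ensemble([9.8, 10.0, 10.2], obs=10.0)
broad = crps_ensemble([7.0, 10.0, 13.0], obs=10.0)
```

For a degenerate one-member "ensemble" the CRPS reduces to the absolute error, which is why CRPS is a natural probabilistic generalization of MAE.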