Funding: supported by the National Natural Science Foundation of China (Grant Nos. 41772309 and 51908431) and the Outstanding Youth Foundation of Hubei Province, China (Grant No. 2019CFA074).
Abstract: Real-time perception of rock mass information is of great importance to efficient tunneling and hazard prevention in tunnel boring machines (TBMs). In this study, a TBM-rock mutual feedback perception method based on data mining (DM) is proposed, which takes 10 tunneling parameters related to surrounding rock conditions as input features. For implementation, first, a database of TBM tunneling parameters was established, accommodating 10,807 tunneling cycles from the Songhua River water conveyance tunnel. Then, the spectral clustering (SC) algorithm based on graph theory was introduced to cluster the TBM tunneling data. According to the clustering results and the rock mass boreability index, the rock mass conditions were classified into four classes, and reasonable distribution intervals of the main tunneling parameters corresponding to each class were presented. Meanwhile, based on a deep neural network (DNN), a real-time prediction model for the different rock conditions was established. Finally, the rationality and adaptability of the proposed method were validated by analyzing the tunneling specific energy, the feature importance, and the training dataset size. The proposed TBM-rock mutual feedback perception method enables automatic identification of rock mass conditions and dynamic adjustment of tunneling parameters during TBM driving. Furthermore, in terms of prediction performance, the method predicts the rock mass conditions ahead of the tunnel face in real time more accurately than traditional machine learning prediction methods.
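The cluster-then-classify pipeline above maps directly onto standard tooling. Below is a minimal sketch, assuming a feature matrix X of the 10 tunneling parameters per cycle; the data, cluster count and network size are illustrative stand-ins, not the paper's configuration.

```python
# Minimal sketch of the cluster-then-classify pipeline described above.
# X stands in for an (n_cycles, 10) array of tunneling parameters.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import SpectralClustering
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # stand-in for real tunneling cycles

Xs = StandardScaler().fit_transform(X)   # put parameters on a common scale

# Graph-based spectral clustering into four rock-condition classes.
labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(Xs)

# A small DNN that predicts the cluster-derived class in real time.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(Xs, labels)
print(clf.predict(Xs[:5]))               # predicted rock-condition classes
```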
Abstract: Non-vacuum storage conditions have a great impact on the explosion characteristics of aluminum powders. In this paper, vacuum-packed flake and globular aluminum powders stored in a dryer after opening the vacuum package are selected as the experimental samples, and a 20 L spherical explosion device is used to test the minimum explosible concentration (MEC) of the aluminum dusts at different storage times. The results show that the MEC values of the two types of unoxidized aluminum powders are both 30 g/m^3. The MEC values of the flake and globular aluminum powders first rise with increasing storage time in the dryer, reach maximum values of 50 g/m^3 and 60 g/m^3 at their respective storage times, and finally stabilize gradually. The main reason is that the oxidation rate is faster owing to the bigger specific surface area of the globular aluminum powders. Hence, storage time has a more significant effect on the MEC of the globular aluminum powder than on that of the flake aluminum powder. After a period of time, the outer surface is oxidized to form a layer of film, which prevents further oxidation of the aluminum powder, resulting in the temporary stability of the MEC.
Abstract: Based on two kinds of proxy data, a tree-ring width chronology at Huashan and the wetness/dryness grade series around Xi'an in north-central China, the present study demonstrates how different types of proxy climate records can be combined to give a more reliable estimate of past climate than either record can provide individually. With comparison and correction of the two data sets, various statistical models can be developed from the individual and combined series. Among them, the best combined model, produced by the conditional quantile adjustment method, can be selected for reconstruction of April-July rainfall at Huashan back to 1600 A.D.
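The paper's conditional quantile adjustment method is not specified here; a closely related, generic way to put one proxy series onto the scale of another is quantile mapping. A minimal sketch with synthetic stand-in series:

```python
# Generic quantile-mapping sketch for rescaling one proxy series onto
# another's distribution; the paper's exact "conditional quantile
# adjustment" may differ. Both arrays are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
tree_ring = rng.normal(0.0, 1.0, 300)        # proxy A (e.g., ring widths)
dry_wet   = rng.normal(5.0, 2.0, 300)        # proxy B (e.g., wetness grades)

def quantile_map(x, reference):
    """Map each value of x to the reference value at the same quantile."""
    ranks = np.searchsorted(np.sort(x), x, side="right") / len(x)
    return np.quantile(reference, np.clip(ranks, 0.0, 1.0))

adjusted = quantile_map(tree_ring, dry_wet)   # tree-ring series on B's scale
combined = 0.5 * (adjusted + dry_wet)         # simple equal-weight combination
print(combined[:5])
```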
Funding: supported by the National Natural Science Foundation of China (Grant No. 49575269) and by the National Key Basic Research on the Formation Mechanism and Prediction Theory of Severe Synoptic Disasters (Grant No. G1998040910).
Abstract: An investigation is carried out on the problem involved in 4D variational data assimilation (VDA) with constraint conditions, based on a finite-element shallow-water equation model. In the investigation, the adjoint technique, the penalty method and the augmented Lagrangian method from the constrained optimization field are used to minimize the defined constrained objective functions. The results of the numerical experiments show that the optimal solutions are obtained when the functions reach their minima. VDA with constraint conditions controlling the growth of gravity oscillations is efficient in eliminating perturbations and produces an optimal initial field. It seems that this method can also be applied to problems in numerical weather prediction. Key words: variational data assimilation; constraint conditions; penalty methods; finite-element model.
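To make the penalty-method step concrete, here is a minimal sketch of quadratic-penalty minimization of an objective under an equality constraint; the toy functions stand in for the paper's finite-element cost function and gravity-oscillation constraint.

```python
# Quadratic-penalty sketch for constrained minimization:
# minimize J(x) subject to c(x) = 0, with an increasing penalty weight.
import numpy as np
from scipy.optimize import minimize

def J(v):                      # stand-in objective (misfit to observations)
    return np.sum((v - np.array([1.0, 2.0])) ** 2)

def c(v):                      # stand-in constraint (e.g., balance condition)
    return v[0] + v[1] - 2.0

x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:          # grow the penalty weight
    res = minimize(lambda v: J(v) + mu * c(v) ** 2, x, method="BFGS")
    x = res.x                                   # warm-start the next round
print(x, c(x))                 # x approaches the constrained optimum
```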
Funding: funded by the Cora Topolewski Cardiac Research Fund at the Children's Hospital of Philadelphia (CHOP), the Pediatric Valve Center Frontier Program at CHOP, the Additional Ventures Single Ventricle Research Fund Expansion Award, the National Institutes of Health (USA) (Nos. NHLBI T32 HL007915 and NIH R01 HL153166), and the U.S. Department of Energy (No. DE-SC0022953).
Abstract: Material identification is critical for understanding the relationship between mechanical properties and the associated mechanical functions. However, material identification is a challenging task, especially when the characteristic of the material is highly nonlinear in nature, as is common in biological tissue. In this work, we identify unknown material properties in continuum solid mechanics via physics-informed neural networks (PINNs). To improve the accuracy and efficiency of PINNs, we develop efficient strategies to nonuniformly sample observational data. We also investigate different approaches to enforce Dirichlet-type boundary conditions (BCs) as soft or hard constraints. Finally, we apply the proposed methods to a diverse set of time-dependent and time-independent solid mechanics examples that span linear elastic and hyperelastic material space. The estimated material parameters achieve relative errors of less than 1%. As such, this work is relevant to diverse applications, including optimizing structural integrity and developing novel materials.
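A minimal PINN-style inverse-problem sketch of the same idea on a 1D linear elastic bar, assuming PyTorch; the network, sampling and PDE are deliberately tiny compared with the paper's time-dependent and hyperelastic cases.

```python
# PINN inverse-problem sketch: recover a scalar material parameter E from
# displacement data on a 1D bar obeying E * u''(x) = -f(x), with f = 1.
import torch

torch.manual_seed(0)
E_true = 2.0
x_data = torch.linspace(0, 1, 50).unsqueeze(1)
u_data = x_data * (1 - x_data) / (2 * E_true)      # exact solution for f = 1

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
logE = torch.zeros(1, requires_grad=True)           # log-parameterized E > 0
opt = torch.optim.Adam(list(net.parameters()) + [logE], lr=1e-2)

x_col = torch.linspace(0, 1, 100).unsqueeze(1).requires_grad_(True)
for step in range(3000):
    opt.zero_grad()
    u = net(x_col)
    du = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_col, create_graph=True)[0]
    pde = (logE.exp() * d2u + 1.0).pow(2).mean()    # residual of E u'' = -1
    fit = (net(x_data) - u_data).pow(2).mean()      # data misfit
    (pde + fit).backward()
    opt.step()
print(float(logE.exp()))                            # should approach E_true = 2
```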
Funding: supported by the Swiss National Science Foundation.
Abstract: Geophysical techniques can help to bridge the inherent gap that exists with regard to spatial resolution and coverage for classical hydrological methods. This has led to the emergence of a new and rapidly growing research domain generally referred to as hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, their inherent trade-off between resolution and range, as well as the notoriously site-specific nature of petrophysical parameter relations, the fundamental usefulness of multi-method surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database into a unified model of the probed subsurface region that is consistent with all available measurements. To this end, we present a novel approach toward hydrogeophysical data integration based on a Monte-Carlo-type conditional stochastic simulation method that we consider to be particularly suitable for high-resolution local-scale studies. Monte Carlo techniques are flexible and versatile, allowing a wide variety of data and constraints of differing resolution and hardness to be accounted for, and thus have the potential of providing, in a geostatistical sense, realistic models of the pertinent target parameter distributions. Compared to more conventional approaches, such as co-kriging or cluster analysis, our approach provides significant advancements in the way that larger-scale structural information contained in the hydrogeophysical data can be accounted for. After outlining the methodological background of our algorithm, we present the results of its application to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the detailed local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to a field dataset collected at the Boise Hydrogeophysical Research Site. Finally, we compare the performance of our data integration approach to that of more conventional methods with regard to the prediction of flow and transport phenomena in highly heterogeneous media and discuss the implications arising.
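A reduced illustration of conditional stochastic simulation: draw realizations of a 1D "porosity" profile that honor point data, using the exact Gaussian posterior. The covariance model, variance and data below are stand-ins; the paper's Monte-Carlo-type algorithm handles far richer data and constraints.

```python
# Conditional-simulation sketch: realizations of a 1D porosity profile
# conditioned on three point measurements, drawn from the Gaussian
# posterior implied by an assumed exponential correlation model.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 100.0, 200)                        # depth grid (m)
corr = np.exp(-np.abs(x[:, None] - x[None, :]) / 15.0)  # correlation model
s2 = 0.03 ** 2                                          # field variance
obs = np.array([20, 80, 150])                           # data locations
y = np.array([0.25, 0.30, 0.22])                        # measured porosity
mu = 0.27                                               # prior mean

Coo = corr[np.ix_(obs, obs)] + 1e-8 * np.eye(len(obs))
Cgo = corr[:, obs]
post_mean = mu + Cgo @ np.linalg.solve(Coo, y - mu)
post_cov = s2 * (corr - Cgo @ np.linalg.solve(Coo, Cgo.T))

# 50 realizations that honor the data (up to numerical jitter).
L = np.linalg.cholesky(post_cov + 1e-10 * np.eye(len(x)))
fields = post_mean + (L @ rng.normal(size=(len(x), 50))).T
print(fields[:, obs].std(axis=0))                       # ~0 at data points
```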
Abstract: Our purpose is twofold: to present a prototypical example of the conditioning technique to obtain the best estimator of a parameter, and to show that this technique resides in the structure of an inner product space. The technique uses conditioning of an unbiased estimator on a sufficient statistic. This procedure is founded upon the conditional variance formula, which leads to an inner product space and a geometric interpretation. The example clearly illustrates the dependence on the sampling methodology. These advantages show the power and centrality of this process.
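The conditioning technique can be demonstrated numerically. A sketch with a Bernoulli sample, where conditioning the unbiased estimator X1 on the sufficient statistic S = sum(X) yields the sample mean; this is an illustrative example, not necessarily the one used in the paper.

```python
# Rao-Blackwell-style conditioning for a Bernoulli(p) sample: start from
# the crude unbiased estimator X1 and condition on the sufficient
# statistic S = sum(X); E[X1 | S] = S/n, whose variance is n times smaller.
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.3, 20, 100_000
X = rng.binomial(1, p, size=(reps, n))

crude = X[:, 0].astype(float)        # unbiased but noisy: Var = p(1-p)
rb = X.mean(axis=1)                  # E[X1 | S] = S/n: Var = p(1-p)/n

print(crude.mean(), rb.mean())       # both ~ p (unbiasedness preserved)
print(crude.var(), rb.var())         # conditioning shrinks variance ~20x
```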
Abstract: This paper proposes some additional moment conditions for the linear feedback model with predetermined explanatory variables, which was proposed by [1] for dealing with count panel data. The newly proposed moment conditions include those associated with equidispersion, the Negbin I-type model, and stationarity. GMM estimators are constructed incorporating the additional moment conditions. Monte Carlo experiments indicate that the GMM estimators incorporating the additional moment conditions perform well compared to those using only the conventional moment conditions proposed by [2,3].
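As a concrete illustration of the GMM machinery (not the paper's panel estimator), here is a two-step GMM sketch on a toy count model using the two moment conditions implied by equidispersion.

```python
# Two-step GMM sketch: estimate a Poisson mean lam from the equidispersion
# moments E[x - lam] = 0 and E[(x - lam)^2 - lam] = 0.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
x = rng.poisson(3.0, 1000)

def moments(lam):                      # per-observation moment vector
    return np.column_stack([x - lam, (x - lam) ** 2 - lam])

def objective(lam, W):
    gbar = moments(lam).mean(axis=0)
    return gbar @ W @ gbar

# Step 1: identity weighting; step 2: efficient weighting from step 1.
lam1 = minimize_scalar(objective, bounds=(0.1, 10), args=(np.eye(2),),
                       method="bounded").x
S = np.cov(moments(lam1).T)            # moment covariance at step-1 estimate
lam2 = minimize_scalar(objective, bounds=(0.1, 10), args=(np.linalg.inv(S),),
                       method="bounded").x
print(lam1, lam2)                      # both close to the true mean 3.0
```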
Abstract: A new covariate-dependent zero-truncated bivariate Poisson model is proposed in this paper, employing the generalized linear model framework. A marginal-conditional approach is used to formulate the bivariate model. The proposed model, along with the estimation procedure and tests for goodness-of-fit and under- (or over-) dispersion, is presented and applied to road safety data. The two correlated outcome variables considered in this study are the number of cars involved in an accident and the number of casualties for a given number of cars.
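A sketch of what the marginal-conditional factorization can look like: a zero-truncated Poisson marginal for the number of cars and a Poisson conditional for casualties given cars, each with a log-linear link. The parameterization is an assumption made for illustration, not the paper's exact specification.

```python
# Marginal-conditional likelihood sketch: y1 (cars, >= 1) follows a
# zero-truncated Poisson; y2 (casualties) is Poisson given y1.
import numpy as np
from scipy.special import gammaln

def zt_poisson_logpmf(y, lam):
    """log P(Y = y | Y > 0) for Poisson(lam), y >= 1."""
    return y * np.log(lam) - lam - gammaln(y + 1) - np.log1p(-np.exp(-lam))

def loglik(beta1, beta2, gamma, X, y1, y2):
    lam1 = np.exp(X @ beta1)                 # marginal mean of y1
    lam2 = np.exp(X @ beta2 + gamma * y1)    # conditional mean of y2 | y1
    cond = y2 * np.log(lam2) - lam2 - gammaln(y2 + 1)   # Poisson log-pmf
    return (zt_poisson_logpmf(y1, lam1) + cond).sum()

# Toy call with a constant and one covariate.
X = np.column_stack([np.ones(5), np.arange(5.0)])
print(loglik(np.array([0.5, 0.1]), np.array([0.2, 0.0]), 0.3,
             X, np.array([1, 2, 1, 3, 2]), np.array([0, 1, 0, 2, 1])))
```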
Abstract: Timely identification and treatment of medical conditions could facilitate faster recovery and better health. Existing systems address this issue using custom-built sensors, which are invasive and difficult to generalize. A low-complexity, scalable process is proposed to detect and identify medical conditions from 2D skeletal movements in video feed data. A minimal set of features relevant to distinguishing medical conditions (AMF, PVF and GDF) is derived from skeletal data on sampled frames across the entire action. The AMF (angular motion features) capture the angular motion of limbs during a specific action. The relative position of joints is represented by the PVF (positional variation features). The GDF (global displacement features) identify the direction of overall skeletal movement. The discriminative capability of these features is illustrated by their variance across time for different actions. The classification of medical conditions is approached in two stages. In the first stage, a low-complexity binary LSTM classifier is trained to distinguish visual medical conditions from general human actions. In the second stage, a multi-class LSTM classifier is trained to identify the exact medical condition from a given set of visually interpretable medical conditions. The proposed features are extracted from the 2D skeletal data of NTU RGB+D and then used to train the binary and multi-class LSTM classifiers. The binary and multi-class classifiers achieved average F1 scores of 77% and 73%, respectively, while the overall system produced an average F1 score of 69% and a weighted average F1 score of 80%. The multi-class classifier is found to use 10 to 100 times fewer parameters than existing 2D CNN-based models while producing similar levels of accuracy.
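A minimal sketch of the two-stage classifier design, assuming PyTorch; the feature dimension (e.g., stacked AMF/PVF/GDF per frame), hidden size and class count are placeholders, not the paper's configuration.

```python
# Two-stage design: a binary LSTM flags "medical condition vs. general
# action", then a multi-class LSTM names the condition.
import torch

class LSTMClassifier(torch.nn.Module):
    def __init__(self, n_feat, n_classes, hidden=64):
        super().__init__()
        self.lstm = torch.nn.LSTM(n_feat, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, frames, n_feat)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits from last hidden state

stage1 = LSTMClassifier(n_feat=30, n_classes=2)   # medical vs. general
stage2 = LSTMClassifier(n_feat=30, n_classes=6)   # which condition

clip = torch.randn(1, 40, 30)              # one 40-frame skeleton clip
if stage1(clip).argmax(dim=1).item() == 1: # flagged as medical
    condition = stage2(clip).argmax(dim=1) # identify the exact condition
    print(condition)
```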
Funding: supported by the Outstanding Youth Foundation of Hunan Provincial Department of Education (Grant No. 22B0911).
Abstract: In this paper, we introduce the censored composite conditional quantile coefficient (cCCQC) to rank the relative importance of each predictor in high-dimensional censored regression. The cCCQC takes advantage of all useful information across quantiles and can effectively detect nonlinear effects, including interactions and heterogeneity. Furthermore, the proposed screening method based on the cCCQC is robust to the existence of outliers and enjoys the sure screening property. Simulation results demonstrate that the proposed method performs competitively on survival datasets with high-dimensional predictors, particularly when the variables are highly correlated.
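To convey the composite-across-quantiles idea (without the censoring correction that defines the cCCQC proper), here is a simplified screening sketch that ranks predictors by a quantile-covariance utility averaged over several quantile levels.

```python
# Simplified (uncensored) composite quantile screening: average a
# quantile-covariance utility over several quantile levels and rank
# predictors by it; the paper's cCCQC additionally handles censoring.
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - X[:, 3] ** 2 + rng.normal(size=n)   # sparse signal

def qc(xj, y, tau):
    """Quantile covariance of psi_tau(y - Q_tau(y)) with xj."""
    psi = tau - (y < np.quantile(y, tau))
    return abs(np.mean(psi * (xj - xj.mean())))

taus = [0.25, 0.5, 0.75]
utility = np.array([[qc(X[:, j], y, t) for t in taus] for j in range(p)])
ranking = np.argsort(-utility.mean(axis=1))   # composite ranking
print(ranking[:5])                            # signal variables should rank high
```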
Abstract: The paper discusses three issues of statistical inference for a common risk ratio (RR) in sparse meta-analysis data. Firstly, the conventional log-risk-ratio estimator encounters a number of problems when the number of events in the experimental or control group of a 2 × 2 table is zero. An adjusted log-risk-ratio estimator is proposed, with continuity-correction points based upon the minimum Bayes risk with respect to the uniform prior density over (0, 1) and the Euclidean loss function. Secondly, the interest is in finding the optimal weights of the pooled estimate that minimize its mean square error (MSE) subject to a constraint on the weights. Finally, the performance of this minimum-MSE weighted estimator, adjusted with various values of the continuity-correction points, is compared with other popular estimators, such as the Mantel-Haenszel (MH) estimator and the weighted least squares (WLS) estimator (also equivalently known as the inverse-variance weighted estimator), in terms of point estimation and hypothesis testing via simulation studies. The estimation results illustrate that, regardless of the true value of RR, the MH estimator achieves the best performance with the smallest MSE when the study size is rather large and the sample sizes within each study are small. The MSEs of the WLS estimator and the proposed weighted estimator are close together, and they are the best when the sample sizes are moderate to large while the study size is rather small.
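For reference, the Mantel-Haenszel pooled risk ratio used as a comparator has a simple closed form; a sketch on made-up 2 × 2 tables follows.

```python
# Mantel-Haenszel pooled risk ratio for a set of 2x2 tables
# (a_i events / n1_i treated, c_i events / n0_i control); toy numbers.
import numpy as np

a  = np.array([3, 1, 4])     # events, experimental group, per study
n1 = np.array([50, 40, 60])  # experimental group sizes
c  = np.array([1, 2, 2])     # events, control group, per study
n0 = np.array([50, 45, 55])  # control group sizes
T  = n1 + n0                 # per-study totals

rr_mh = np.sum(a * n0 / T) / np.sum(c * n1 / T)
print(rr_mh)                 # pooled risk ratio across the three studies
```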
Abstract: A profound understanding of the costs of performing condition assessment on buried drinking water pipeline infrastructure is required for enhanced asset management. Toward this end, an automated and uniform method of collecting cost data can provide water utilities a means for viewing, understanding, interpreting and visualizing complex geographically referenced cost information to reveal data relationships, patterns and trends. However, there has been no standard data model that allows automated data collection and interoperability across platforms. The primary objective of this research is to develop a standard cost data model for drinking water pipeline condition assessment projects and to conflate disparate datasets from differing utilities. The capabilities of this model are further demonstrated by performing trend analyses. Field mapping files are generated from the standard data model and demonstrated in an interactive web map created using the Google Maps API (application programming interface) for JavaScript, which allows the user to toggle project examples and perform regional comparisons. The aggregation of standardized data and its further use in mapping applications will help provide timely access to condition assessment cost information and resources, leading to enhanced asset management and resource allocation for drinking water utilities.
Funding: supported by the National Social Science Fund of China (Grant No. 21XTJ001).
Abstract: A Receiver Operating Characteristic (ROC) analysis of power is important and useful in clinical trials. The Classical Conditional Power (CCP) is the probability of a classical rejection region given the true treatment effect and the interim result. For hypotheses and reversed hypotheses under normal models, we obtain analytical expressions for the ROC curves of the CCP, find optimal ROC curves of the CCP, investigate the superiority of the ROC curves of the CCP, calculate critical values of the False Positive Rate (FPR), the True Positive Rate (TPR), and the cutoff of the optimal CCP, and give go/no-go decisions at the interim for the optimal CCP. In addition, extensive numerical experiments are carried out to exemplify our theoretical results. Finally, a real data example illustrates the go/no-go decisions of the optimal CCP.
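Conditional power under a normal model can also be checked by simulation. Below is a Monte Carlo sketch of the CCP for a one-sided z-test, with illustrative numbers; the paper works with analytical expressions instead.

```python
# Monte Carlo conditional power: probability of final rejection given the
# interim z-statistic and an assumed true effect theta (in sd units).
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, t, theta, n_total, alpha=0.025, reps=100_000):
    """t = information fraction at the interim analysis."""
    rng = np.random.default_rng(6)
    n1 = int(t * n_total)
    n2 = n_total - n1
    s1 = z_interim * np.sqrt(n1)      # interim sum implied by z_interim
    # Simulate the remaining n2 observations under the assumed effect.
    s2 = rng.normal(theta * n2, np.sqrt(n2), size=reps)
    z_final = (s1 + s2) / np.sqrt(n_total)
    return np.mean(z_final > norm.ppf(1 - alpha))

print(conditional_power(z_interim=1.5, t=0.5, theta=0.3, n_total=100))
```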
Funding: supported by the National Key Research and Development Program of China (2023YFB 2906403).
Abstract: The phasor data concentrator placement (PDCP) in wide area measurement systems (WAMS) is an optimization problem in communication network planning for the power grid. Instead of using the traditional integer linear programming (ILP) based modeling and solution schemes, which ignore the graph-related features of WAMS, in this work the PDCP problem is solved through a heuristic graph-based two-phase procedure (TPP): topology partitioning, and phasor data concentrator (PDC) provisioning. Based on existing minimum k-section algorithms in graph theory, a k-base topology partitioning algorithm is proposed. To improve performance, a "center-node-last" pre-partitioning algorithm is proposed to give an initial partition before the k-base partitioning algorithm is applied. Then, a PDC provisioning algorithm is proposed to locate PDCs in the decomposed sub-graphs. The proposed TPP was evaluated on five different IEEE benchmark test power systems, and the overall communication performance achieved, compared to the ILP-based schemes, shows the validity and efficiency of the proposed method.
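A toy rendering of the two-phase idea, assuming networkx: bisect the grid graph (standing in for the paper's k-section and pre-partitioning steps), then provision one PDC per sub-graph at its most central node.

```python
# Partition-then-provision sketch: Kernighan-Lin bisection of a stand-in
# bus topology, then one PDC per part at the most central bus.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G = nx.random_regular_graph(3, 14, seed=7)            # stand-in topology

part_a, part_b = kernighan_lin_bisection(G, seed=7)   # phase 1: partition

pdcs = []
for part in (part_a, part_b):                         # phase 2: provisioning
    sub = G.subgraph(part)
    centrality = nx.closeness_centrality(sub)
    pdcs.append(max(centrality, key=centrality.get))  # most central bus
print(pdcs)                                           # chosen PDC locations
```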
Funding: financially supported by the Natural Science Basic Foundation of China (Program No. 52174325), the Key Research and Development Program of Shaanxi (Grant No. 2020GY-166 and Program No. 2020GY-247), and the Shaanxi Provincial Innovation Capacity Support Plan (Grant No. 2023-CX-TD-53).
Abstract: Predicting NO_x in the sintering process of iron ore powder in advance is helpful for adjusting the denitrification process in time. Taking NO_x in the sintering process of iron ore powder as the object, the boxplot, the empirical mode decomposition algorithm, the Pearson correlation coefficient, the maximum information coefficient and other methods were used to preprocess the sintering data, and a naive Bayes classification algorithm was used to identify the sintering conditions. Regression prediction models with high accuracy and good stability were selected as sub-models for the different sintering conditions, and the sub-models were combined into an integrated prediction model. Based on actual operational data, the approach proved the superiority and effectiveness of the developed model in predicting NO_x, yielding an accuracy of 96.17% and an absolute error of 5.56, thereby providing valuable foresight for on-site sintering operations.
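A sketch of the condition-gated ensemble: a naive Bayes classifier identifies the sintering condition and routes each sample to a per-condition regression sub-model. The data, labels and sub-model choice are placeholders; the paper selects the best-performing sub-model per condition.

```python
# Condition-gated ensemble: GaussianNB identifies the condition, then a
# per-condition regressor predicts NOx.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(8)
X = rng.normal(size=(600, 8))                 # preprocessed process features
cond = (X[:, 0] > 0).astype(int)              # stand-in condition labels
nox = 50 + 10 * X[:, 1] + 5 * cond * X[:, 2] + rng.normal(size=600)

gate = GaussianNB().fit(X, cond)              # condition identification
subs = {c: GradientBoostingRegressor().fit(X[cond == c], nox[cond == c])
        for c in (0, 1)}                      # one sub-model per condition

def predict(x_row):
    c = gate.predict(x_row.reshape(1, -1))[0]
    return subs[c].predict(x_row.reshape(1, -1))[0]

print(predict(X[0]))                          # NOx prediction for one sample
```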
Funding: supported by the National Natural Science Foundation of China and Aviation Fund (60879001), the Natural Science Foundation of Jiangsu Province (BK2009378), the Fundamental Research Fund of Nanjing University of Aeronautics and Astronautics (NS2010179), and the Qinglan Project of Jiangsu Province.
Abstract: Reliability evaluation for aircraft engines is difficult because of the scarcity of failure data, but aircraft engine data are available from a variety of sources. Data fusion maximizes the amount of valuable information extracted from disparate data sources to obtain comprehensive reliability knowledge. Considering the degradation failure and the catastrophic failure simultaneously, which are competing risks that both affect reliability, a reliability evaluation model based on data fusion for aircraft engines is developed. Given these characteristics, reliability evaluation with the proposed model is more feasible than evaluation utilizing failure data alone, and also more accurate than evaluation considering only a single failure mode. An example shows the effectiveness of the proposed model.
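With independent competing risks, the combined reliability is the product of the per-mode reliabilities, R(t) = R_deg(t) * R_cat(t). A sketch with illustrative (not fitted) distributions:

```python
# Competing-risks sketch: independent degradation and catastrophic modes
# combine multiplicatively; distributions and parameters are illustrative.
import numpy as np

def R_deg(t, eta=9000.0, beta=2.5):      # Weibull degradation reliability
    return np.exp(-(t / eta) ** beta)

def R_cat(t, lam=1e-4):                  # exponential catastrophic failures
    return np.exp(-lam * t)

t = np.array([1000.0, 3000.0, 6000.0])   # operating hours
print(R_deg(t) * R_cat(t))               # combined engine reliability
```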
Abstract: The use of hidden conditional random fields (HCRFs) for tone modeling is explored. Tone recognition performance is improved with HCRFs by taking advantage of intra-syllable dynamic, inter-syllable dynamic and duration features. When the tone model is integrated into continuous speech recognition, discriminative model weight training (DMWT) is proposed: acoustic and tone scores are scaled by model weights discriminatively trained by the minimum phone error (MPE) criterion. Two weight-training schemes are evaluated, and a smoothing technique is used to make training robust to the overtraining problem. Experiments show that the accuracies of tone recognition and large vocabulary continuous speech recognition (LVCSR) can be improved by the HCRF-based tone model. Compared with the global weight scheme, continuous speech recognition can be improved by the discriminatively trained weight combinations.
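The score combination that DMWT tunes can be pictured as a log-linear weighting of acoustic and tone scores. Below is a toy discriminative update that pushes correct hypotheses above competitors; this is a generic margin update for illustration, not the MPE criterion used in the paper.

```python
# Toy discriminative tuning of acoustic/tone score weights: raise the
# combined score of correct hypotheses above competing ones.
import numpy as np

rng = np.random.default_rng(9)
# (acoustic, tone) log-scores for correct and competing hypotheses.
correct = rng.normal([0.5, 0.8], 0.5, size=(200, 2))
competing = rng.normal([0.0, 0.0], 0.5, size=(200, 2))

w = np.ones(2)                             # acoustic and tone weights
for _ in range(200):
    margin = correct @ w - competing @ w
    grad = ((margin < 1.0)[:, None] * (correct - competing)).mean(axis=0)
    w += 0.1 * grad                        # hinge-style margin update
print(w)                                   # learned weight combination
```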
Abstract: It is not reasonable that one can only use the adjoint of a model in data assimilation. The simulated numerical experiment shows that, for the tidal model, the result of the adjoint of the equation is almost the same as that of the adjoint of the model: the averaged absolute difference of the amplitude between observations and simulation is less than 5.0 cm, and that of the phase-lag is less than 5.0°. The results are both in good agreement with the observed M2 tide in the Bohai Sea and the Yellow Sea. For comparison, traditional methods have also been used to simulate the M2 tide in the Bohai Sea and the Yellow Sea: initial guess values of the boundary conditions are given first and then adjusted to bring the simulated results as close as possible to the observations. As the boundary conditions contain 72 values, deciding which values to adjust and how to adjust them can only be partially resolved by repeated manual adjustment, and satisfactory results are hard to acquire even with enormous effort. Here, the treatment of the open boundary conditions is automated. The method is unique and superior to the traditional methods. It is emphasized that if the adjoint of the equation is used, tedious and complicated mathematical deduction can be avoided. Therefore, the adjoint of the equation deserves much attention.
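The appeal of the adjoint approach is that boundary values are tuned automatically by gradients of the misfit rather than by manual trial and error. A toy sketch follows, using PyTorch autograd in place of a hand-derived adjoint to recover one open-boundary value of a trivial 1D steady state; the tidal model itself is far richer.

```python
# Adjoint-style boundary tuning: fit an open-boundary value b so a 1D
# diffusion steady state matches interior "observations".
import torch

n = 20
b_true = torch.tensor(3.0)

def solve(b):
    """Steady 1D Laplace with u(0) = 0 and u(1) = b: a linear ramp."""
    x = torch.linspace(0, 1, n)
    return b * x

obs = solve(b_true) + 0.01 * torch.randn(n)

b = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([b], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = ((solve(b) - obs) ** 2).mean()  # misfit cost function
    loss.backward()                        # gradient via reverse mode
    opt.step()
print(float(b))                            # recovers b_true ~ 3.0
```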
Funding: supported by the National Hi-Tech Research and Development Program of China (Grant No. 2006AA09A102-11) and the National Natural Science Fund of China (Grant Nos. 40730424 and 40674064).
Abstract: The imaging of offset VSP data in local phase space can improve the image of the subsurface structure near the well. In this paper, we present a migration scheme for imaging VSP data in a local phase space, which uses the Gabor-Daubechies tight-frame-based extrapolator (G-D extrapolator) and its high-frequency asymptotic expansion to extrapolate wavefields, and also delineates an improved correlation imaging condition in the local angle domain. The results of migrating synthetic and real VSP data demonstrate that applying the high-frequency asymptotic expansion of the G-D extrapolator effectively decreases computational complexity. The local-angle-domain correlation imaging condition can be used to weaken migration artifacts without increasing computation.
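For orientation, the generic zero-lag cross-correlation imaging condition that the paper refines in the local angle domain sums the product of source and receiver wavefields over time. A sketch with random stand-in wavefields:

```python
# Zero-lag cross-correlation imaging condition:
# I(z, x) = sum_t S(t, z, x) * R(t, z, x); wavefields here are stand-ins.
import numpy as np

rng = np.random.default_rng(10)
nt, nz, nx = 200, 40, 50
S = rng.normal(size=(nt, nz, nx))        # extrapolated source wavefield
R = rng.normal(size=(nt, nz, nx))        # extrapolated receiver wavefield

image = np.einsum("tzx,tzx->zx", S, R)   # sum over time at each point
print(image.shape)                       # (nz, nx) migrated image
```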