We used the geological map and published rock density measurements to compile a digital rock density model for the Hong Kong territories. We then estimated the average density for the whole territory. According to our results, rock density values in Hong Kong vary from 2101 to 2681 kg·m^(-3). These values are typically smaller than the density of 2670 kg·m^(-3) often adopted to represent the average density of the upper continental crust in physical geodesy and gravimetric geophysics applications. This finding reflects the fact that the geology of Hong Kong is formed mainly of light volcanic formations and lava flows, overlain at many locations by sedimentary deposits, while the percentage of heavier metamorphic rocks is very low (less than 1%). This product will improve the accuracy of a detailed geoid model and of orthometric heights.
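The territory-wide mean in such a density model is essentially an area-weighted average over the mapped geological units. A minimal sketch in Python (the unit names, areas, and densities below are hypothetical illustrations, not the paper's data):

```python
def area_weighted_density(units):
    """Area-weighted mean density over mapped geological units.

    units: list of (area_km2, density_kg_m3) tuples.
    """
    total_area = sum(a for a, _ in units)
    return sum(a * rho for a, rho in units) / total_area

# Hypothetical units: (area in km^2, density in kg·m^-3)
units = [
    (600.0, 2450.0),   # volcanic formations
    (350.0, 2620.0),   # granitic intrusions
    (150.0, 2200.0),   # sedimentary deposits
]
mean_rho = area_weighted_density(units)
```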
The microstructure evolution of 7A85 aluminum alloy under strain rates of 0.001−1 s^(−1) and deformation temperatures of 250−450°C was studied by optical microscopy (OM) and electron backscatter diffraction (EBSD). Based on the Kocks-Mecking (K-M) dislocation density model, a two-stage K-M dislocation density model of 7A85 aluminum alloy was established. The results reveal that dynamic recovery (DRV) and dynamic recrystallization (DRX) are the main mechanisms of microstructure evolution during the thermal deformation of 7A85 aluminum alloy, with 350−400°C being the transition zone from DRV to DRX. At low temperatures (≤350°C), DRV is the main mechanism, while DRX mostly occurs at high temperatures (≥400°C); in this range, the sensitivity of the microstructure evolution to temperature is relatively high. As the temperature increased, the average misorientation angle (θ̄_(c)) increased significantly, from 0.93° to 7.13°. Meanwhile, the fraction of low-angle grain boundaries (f_(LAGBs)) decreased, with a maximum decrease of 24%.
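The K-M law balances dislocation storage against recovery, dρ/dε = k1·√ρ − k2·ρ, with flow stress σ = α·μ·b·√ρ; the paper's two-stage variant switches coefficient sets between the DRV and DRX regimes. A single-stage sketch with hypothetical coefficients (not fitted to 7A85 data):

```python
import math

def km_flow_stress(strain_max, k1, k2, rho0, alpha, mu, b, n_steps=1000):
    """Integrate the K-M law drho/deps = k1*sqrt(rho) - k2*rho by forward
    Euler and return the flow stress sigma = alpha*mu*b*sqrt(rho)."""
    rho = rho0
    deps = strain_max / n_steps
    for _ in range(n_steps):
        rho += deps * (k1 * math.sqrt(rho) - k2 * rho)
    return alpha * mu * b * math.sqrt(rho)

# Hypothetical values for illustration only
sigma = km_flow_stress(strain_max=0.5, k1=3e8, k2=10.0,
                       rho0=1e12, alpha=0.3, mu=2.6e10, b=2.86e-10)
```

The saturation stress implied by these coefficients is α·μ·b·(k1/k2); the integrated stress approaches it monotonically from below.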
From 1997 to 2000, four field surveys were conducted in the East China Sea (ECS) (23°30'-33°00'N, 118°30'-128°00'E). A field-data yield density model was used to determine the optimal salinities for 19 dominant copepod species and to establish the relationship between surface salinity and the abundance of those species. In addition, ecological groups of the copepods were classified based on optimal salinity and geographical distribution. The results indicate that the yield density model is suitable for determining the relationship between salinity and abundance. Cosmocalanus darwini, Euchaeta rimana, Pleuromamma gracilis, Rhincalanus cornutus, Scolecithrix danae, and Pareucalanus attenuatus were determined to be oceanic species, with optimal salinities of >34.0. They were stenohaline and mainly distributed in waters influenced by the Kuroshio or the Taiwan Warm Current. Temora discaudata, T. stylifera, and Canthocalanus pauper were nearshore species with optimal salinities of <33.0, most abundant in coastal waters. The remaining 10 species, including Undinula vulgaris and Subeucalanus subcrassus, were offshore species, with optimal salinities ranging from 33.0 to 34.0. They were widely distributed in nearshore, offshore, and oceanic waters, but mainly in the mixed water of the ECS.
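The abstract does not spell out the functional form of the yield density model; one common way to estimate an optimal salinity from survey data is to fit a Gaussian-type response, ln Y = a + b·S + c·S², whose maximum lies at S* = −b/(2c). A sketch under that assumption (not the paper's exact model):

```python
import math

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 via the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    # Gauss-Jordan elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(3):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [x - f * y for x, y in zip(A[r], A[i])]
    a, b, c = (A[i][3] / A[i][i] for i in range(3))
    return a, b, c

def optimal_salinity(salinities, abundances):
    """Vertex of a quadratic fitted to ln(abundance) vs. salinity."""
    a, b, c = fit_quadratic(salinities, [math.log(y) for y in abundances])
    return -b / (2.0 * c)
```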
Large eddy simulations (LESs) are performed to investigate the Cambridge premixed and stratified flames, SwB1 and SwB5, respectively. The flame surface density (FSD) model, incorporating two different wrinkling factor models, i.e., the Muppala and Charlette2 wrinkling factor models, is used to describe the combustion/turbulence interaction, and the flamelet generated manifolds (FGM) method is employed to determine the major scalars. This coupled sub-grid scale (SGS) combustion model is named the FSD-FGM model. The FGM method can provide the detailed species in the flame that cannot be obtained from the original FSD model. The LES results show that the FSD-FGM model is able to describe flame propagation, especially for stratified flames. The Charlette2 wrinkling factor model performs better than the Muppala wrinkling factor model in predicting the change of flame surface area due to turbulence. The combustion characteristics are analyzed in detail using the flame index and the probability distributions of the equivalence ratio and the orientation angle, which confirm that for the investigated stratified flame, the dominant combustion modes in the upstream and downstream regions are the premixed mode and the back-supported mode, respectively.
For satellites in orbit, most perturbations can be modeled well; however, the inaccuracy of the atmospheric density model remains the biggest error source in orbit determination and prediction. The commonly used empirical atmospheric density models, such as Jacchia, NRLMSISE, DTM, and the Russian GOST, still have a relative error of about 10%-30%. Because of the uncertainty in the atmospheric density distribution, high-accuracy estimation of the atmospheric density cannot be achieved using a deterministic model. A better way to improve the accuracy is to calibrate the model with updated measurements. Two-line element (TLE) sets provide accessible orbital data, which can be used in the model calibration. In this paper, an algorithm for calibrating the atmospheric density model is developed. First, the density distribution of the atmosphere is represented by a power series expansion whose coefficients are given by spherical harmonic expansions. Then, useful historical TLE data are selected. The ballistic coefficients of the objects are estimated using the BSTAR data in the TLEs, and the parameterized model is calibrated by solving a nonlinear least squares problem. Simulation results show that the prediction error is reduced using the proposed calibration algorithm.
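The paper calibrates a spherical-harmonic parameterization by nonlinear least squares; as a far simpler stand-in, the sketch below fits a two-parameter multiplicative correction exp(c0 + c1·h) to the log ratio of TLE-derived to model densities (the functional form and variable names are illustrative assumptions, not the paper's):

```python
import math

def calibrate_density(model_rho, observed_rho, altitudes):
    """Fit rho_cal = rho_model * exp(c0 + c1*h) by ordinary linear least
    squares on y = log(rho_obs / rho_model) against altitude h, and return
    the calibrated density function."""
    ys = [math.log(o / m) for o, m in zip(observed_rho, model_rho)]
    n = len(altitudes)
    sx = sum(altitudes)
    sy = sum(ys)
    sxx = sum(h * h for h in altitudes)
    sxy = sum(h * y for h, y in zip(altitudes, ys))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c0 = (sy - c1 * sx) / n
    return lambda rho, h: rho * math.exp(c0 + c1 * h)
```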
An improved method using kernel density estimation (KDE) and a confidence level is presented for model validation with small samples. Decision making is challenging because of input uncertainty, and only small samples can be used due to the high cost of experimental measurements. Model validation improves prediction accuracy and thereby provides more confidence for decision makers. The confidence-level method is introduced, and the optimum sample variance is determined using a new method in kernel density estimation to increase the credibility of model validation. As a numerical example, the static frame model validation challenge problem presented by Sandia National Laboratories is chosen. The optimum bandwidth is selected in kernel density estimation in order to build the probability model based on the calibration data. The model is assessed using the validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model, and the results are consistent with those obtained by other researchers. The results demonstrate that the method using the improved confidence level and kernel density estimation is an effective approach to the model validation problem with small samples.
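For the KDE ingredient, a baseline sketch: a Gaussian kernel estimator with Silverman's rule-of-thumb bandwidth (the paper instead tunes the bandwidth/sample variance against a confidence-level criterion, which is not reproduced here):

```python
import math

def silverman_bandwidth(data):
    """Silverman's rule-of-thumb bandwidth h = 1.06 * std * n**(-1/5)."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 1.06 * std * n ** (-0.2)

def gaussian_kde(data, h=None):
    """Return a Gaussian kernel density estimate f(x) built from `data`."""
    if h is None:
        h = silverman_bandwidth(data)
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
```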
The methods of deriving FeO and TiO_(2) contents from the Clementine spacecraft data were discussed, and an approach was developed to derive the contents from measurements of the Moon Mineralogy Mapper (M^(3)) instrument on Chandrayaan-1. The density of lunar bedrock was then modeled on the basis of the derived FeO and TiO_(2) abundances. The FeO and TiO_(2) abundances derived from the M^(3) data were compared with the previous results from the Clementine data and were in good agreement. The FeO abundance data also agreed well with the Lunar Prospector data, which were used as an independent source. The previous Clementine-derived and the new M^(3)-derived abundances were compared with the laboratory-measured FeO and TiO_(2) contents of the Apollo and Luna returned samples. The Clementine-derived FeO content was systematically 1%-2% lower than the laboratory measurements in all the returned samples. The M^(3)-derived content agreed well with the returned Apollo samples and was within ±2.8% of the laboratory measurements. The Clementine-derived TiO_(2) abundance was systematically 0.1%-4% higher than the laboratory measurements of the returned samples. The M^(3)-derived TiO_(2) agreed well (±0.6%) with the laboratory measurements of the returned samples, except for samples with high TiO_(2) content. However, these results should be interpreted carefully because the error range requires verification; no error analysis was provided with the previous Clementine-derived contents.
In this paper, we consider testing the hypothesis concerning the means of two independent semicontinuous distributions whose observations are zero-inflated, characterized by a sizable number of zeros together with positive observations from a continuous distribution. The continuous parts of the two semicontinuous distributions are assumed to follow a density ratio model. A new two-part test is developed for this kind of data. The proposed test takes the sum of one test for equality of the proportions of zero values and one conditional test for the continuous distribution. The test is proved to follow a χ^2 distribution with two degrees of freedom. Simulation studies show that the proposed test controls the type I error rates at the desired level and is competitive with, and most of the time more powerful than, two popular tests. A real data example from a dietary intervention study is used to illustrate the usefulness of the proposed test.
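As a sketch of the generic two-part construction: a squared z-statistic for the zero proportions plus one for the positive parts, summed and referred to a χ² distribution on 2 degrees of freedom. This is the classical Lachenbruch-style variant; the paper replaces the second part with a density-ratio-model-based conditional test, which is not reproduced here:

```python
import math

def two_part_statistic(x, y):
    """Generic two-part statistic for zero-inflated samples x and y:
    z_zero tests equality of the zero proportions, z_pos tests equality of
    the means of the positive parts; z_zero**2 + z_pos**2 is compared with
    a chi-square on 2 df under the null hypothesis."""
    n1, n2 = len(x), len(y)
    p1 = sum(1 for v in x if v == 0) / n1
    p2 = sum(1 for v in y if v == 0) / n2
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled zero proportion
    z_zero = (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    xp = [v for v in x if v > 0]
    yp = [v for v in y if v > 0]
    m1, m2 = sum(xp) / len(xp), sum(yp) / len(yp)
    v1 = sum((v - m1) ** 2 for v in xp) / (len(xp) - 1)
    v2 = sum((v - m2) ** 2 for v in yp) / (len(yp) - 1)
    z_pos = (m1 - m2) / math.sqrt(v1 / len(xp) + v2 / len(yp))
    return z_zero ** 2 + z_pos ** 2
```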
Turbulent gas-particle flows are studied by a kinetic description using a probability density function (PDF). Unlike other investigators, who derive the particle Reynolds stress equations from the PDF equations, we solve the particle PDF transport equations directly, either with a finite-difference method for two-dimensional (2D) problems or with a Monte-Carlo (MC) method for three-dimensional (3D) problems. The proposed differential stress model together with the PDF (DSM-PDF) is used to simulate turbulent swirling gas-particle flows. The simulation results are compared with the experimental results and the second-order moment (SOM) two-phase modeling results. All of these simulation results are in agreement with the experimental results, implying that the PDF approach validates the SOM two-phase turbulence modeling. The PDF model with the SOM-MC method is used to simulate evaporating gas-droplet flows, and the simulation results are in good agreement with the experimental results.
This paper focuses on the identification problem of a neuro-fuzzy model (NFM) applied in batch processes. A hybrid learning algorithm is introduced to identify the proposed NFM, based on the idea of an auxiliary error model and an identification principle built on the probability density function (PDF). The main contribution is that the NFM parameter updating is transformed into shape control of the PDF of the modeling error. More specifically, a virtual adaptive control system is constructed with the aid of the auxiliary error model, and the PDF shape control idea is then used to tune the NFM parameters so that the PDF of the modeling error is driven to follow a target PDF, which is Gaussian or uniform. Examples are used to validate the applicability of the proposed method, and comparisons are made with minimum mean square error based approaches.
We investigate the topological phase transition driven by non-local electronic correlations in a realistic quantum anomalous Hall model consisting of d_(xy)–d_(x^(2)-y^(2)) orbitals. Three topologically distinct phases defined in the noninteracting limit evolve into different charge density wave phases under correlations. Two conspicuous conclusions are obtained: the topological phase transition does not involve gap closing, and the dynamical fluctuations significantly suppress the charge order favored by the next-nearest-neighbor interaction. Our study sheds light on the stability of topological phases under electronic correlations, and we demonstrate a positive role played by dynamical fluctuations that is distinct from all previous studies on correlated topological states.
In this paper, we report a method by which the ion quantity is estimated rapidly with an accuracy of 4%. The method is based on low-temperature ion density theory combined with the ion crystal size obtained from experiment with micrometer precision. The method is objective, straightforward, and independent of molecular dynamics (MD) simulation. The result can be used as a reference for MD simulation, and the method can improve the reliability and precision of MD simulation. This method is very helpful for in-depth studies of ion crystals, such as phase transitions, spatial configuration, temporal evolution, dynamic character, cooling efficiency, and the temperature limit of the ions.
This paper reviews the differences between the deterministic (sharp and diffuse) and statistical models of the interphase region between two phases. In the literature this region is usually referred to as the (macroscopic) interface. Therein, the mesoscopic interface, which is defined at the molecular level and agitated by thermal fluctuations, is found with nonzero probability. For this reason, in this work the interphase region is called the mesoscopic intermittency/transition region. The first part of the present work gives the rationale for introducing a statistical model of the mesoscopic intermittency region. It is argued that the classical (deterministic) sharp and diffuse models do not explain the experimental and numerical results presented in the literature. It is then elucidated that a statistical model of the mesoscopic intermittency region (SMIR) combines the existing sharp and diffuse models into a single coherent framework and explains the published experimental and numerical results. In the second part of the paper, the SMIR is used for the first time to predict equilibrium and nonequilibrium two-phase flow in numerical simulation. To this end, a two-dimensional rising gas bubble is studied; the numerical results obtained are used as a basis to discuss the differences between the deterministic and statistical models, showing that the statistical description has the potential to account for physical phenomena not previously considered in computer simulations.
Heavy-medium cyclones are widely used to upgrade run-of-mine coal, but the understanding of flow in a cyclone containing a dense medium is still incomplete. By introducing turbulent diffusion into calculations of centrifugal settling, a theoretical distribution function giving the density field can be deduced. Qualitative analysis of the density field in every part of a cylindrical cyclone suggests an optimum design that has exhibited good separation effectiveness and anti-wear performance in commercial operation.
Furrow irrigation is a traditional, widely used irrigation method. Understanding the dynamics of soil water distribution is essential to developing effective furrow irrigation strategies, especially in water-limited regions. The objectives of this study are to analyze the root length density distribution and to explore soil water dynamics by simulating soil water content with a HYDRUS-2D model that accounts for root water uptake, for furrow-irrigated tomato plants in a solar greenhouse in Northwest China. Soil water contents were also observed in situ by ECH_2O sensors from 4 June to 19 June and from 21 June to 4 July, 2012. Results showed that the root length density of the tomato plants was concentrated in the 0-50 cm soil layers, and extended 0-18 cm toward the furrow and 0-30 cm along the bed axis. Soil water content values simulated by the HYDRUS-2D model agreed well with those observed by the ECH_2O sensors, with a regression coefficient of 0.988, a coefficient of determination of 0.89, and an index of agreement of 0.97. The HYDRUS-2D model with the calibrated parameters was then applied to explore optimal irrigation scheduling. Infrequent irrigation with a large amount of water per irrigation event could result in 10%-18% of the irrigation water being lost. We therefore recommend high irrigation frequency with a low amount of water per event in greenhouses in arid regions. The maximum irrigation amount and the suitable irrigation interval required to avoid plant water stress and drainage water were 34 mm and 6 days, respectively, for a given daily average transpiration rate of 4.0 mm/d.
To sum up, the HYDRUS-2D model with consideration of root water uptake can be used to improve irrigation scheduling for furrow-irrigated tomato plants in greenhouses in arid regions.
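The index of agreement reported above (0.97) is Willmott's d statistic, d = 1 − Σ(O−P)² / Σ(|P−Ō| + |O−Ō|)², where O is observed, P is simulated, and Ō is the observed mean; d = 1 indicates a perfect match. A minimal implementation:

```python
def index_of_agreement(obs, sim):
    """Willmott's index of agreement between observed and simulated series."""
    obar = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, sim))
    den = sum((abs(p - obar) + abs(o - obar)) ** 2 for o, p in zip(obs, sim))
    return 1.0 - num / den
```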
Cavitation clouds with different internal structures produce different collapse pressures owing to the interaction among bubbles, so the internal structure of the cloud must be known to predict the collapse pressure accurately. A cavitation model was developed through dimensional analysis and direct numerical simulation of the collapse of a bubble cluster. The bubble number density was included in the proposed model to characterize the internal structure of the bubble cloud. Applied to flows over a projectile, the proposed model predicts a higher collapse pressure than the Singhal model. The results indicate that the collapse pressure of a detached cavitation cloud is affected by the bubble number density.
Ni0.35Zn0.65Fe2O4 ferrite was synthesized by the self-propagating high-temperature synthesis (SHS) method. In the SHS process, the combustion temperature and velocity were the main process parameters, and they were determined by the Fe content, the grain size of the ferrite powder, the relative density, and the oxygen pressure. In this paper, the effects of Fe content, grain size, and oxygen pressure on combustion temperature and velocity are discussed. The relation between combustion temperature and magnetic permeability is also studied, and polynomial regression is used to establish a mathematical model of this relation.
It is understood that the forward-backward probability hypothesis density (PHD) smoothing algorithms proposed recently can significantly improve the state estimation of targets. However, our analyses in this paper show that they cannot give a good cardinality (i.e., number of targets) estimate. This is because backward smoothing ignores the effect of temporary track dropping caused by forward filtering and/or anomalous smoothing resulting from the deaths of targets. To cope with this problem, a novel PHD smoothing algorithm, called the variable-lag PHD smoother, is developed here; a detection process that identifies whether the filtered cardinality varies within the smoothing lag is added before backward smoothing. The analytical results show that the proposed smoother can almost eliminate the influences of temporary track dropping and anomalous smoothing, while both the cardinality and the state estimates are significantly improved. Simulation results on two multi-target tracking scenarios verify the effectiveness of the proposed smoother.
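The detection step can be sketched as a lag-selection rule: shrink the smoothing lag until the filtered cardinality estimate is constant over the window, so that backward smoothing never spans a cardinality change. This is an illustrative reading of the abstract only; the PHD filter and smoother recursions themselves are omitted:

```python
def variable_lag(cardinalities, k, max_lag):
    """Pick the smoothing lag to use at time step k: start from max_lag and
    shrink until the filtered cardinality is constant over the window.
    Returns 0 if even a lag of 1 spans a cardinality change."""
    for lag in range(min(max_lag, k), 0, -1):
        window = cardinalities[k - lag : k + 1]
        if all(c == window[0] for c in window):
            return lag
    return 0
```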
This paper examines the forecasting performance of different kinds of GARCH models (GARCH, EGARCH, TARCH, and APARCH) under the Normal, Student-t, and Generalized Error distributional assumptions, comparing the effect of each distributional assumption on the GARCH models. The data analyzed are the daily stock indexes of the Shenzhen Stock Exchange (SSE) in China from April 3, 1991 to April 14, 2005. We find that improvements in the overall estimation are achieved when asymmetric GARCH models are used with the Student-t distribution and the generalized error distribution. Moreover, the TARCH and GARCH models give better forecasting performance than the EGARCH and APARCH models. In forecasting performance, the model under the normal distribution gives more accurate forecasts than the non-normal densities, and the generalized error distributions clearly outperform the Student-t densities in the case of the SSE.
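For the plain GARCH(1,1) case, the Gaussian likelihood being maximized can be sketched as below; EGARCH, TARCH, and APARCH change the variance recursion, while the Student-t or GED assumptions change the per-observation density. Initializing the recursion at the sample variance is a common convention, not necessarily the paper's choice:

```python
import math

def garch11_neg_loglik(returns, omega, alpha, beta):
    """Gaussian negative log-likelihood of a GARCH(1,1) model
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    with the variance recursion started at the sample variance."""
    n = len(returns)
    var = sum(r * r for r in returns) / n
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2.0 * math.pi * var) + r * r / var)
        var = omega + alpha * r * r + beta * var
    return nll
```

Minimizing this over (omega, alpha, beta) with any numerical optimizer fits the model; forecast comparisons then evaluate the fitted variance path out of sample.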
Land use/cover change (LUCC) is a measure that offers insights into the interaction between human activities and the natural environment, which significantly impacts the ecological environment of a region. Based on data from 2000 to 2020 on land use, topography, climate, the economy, and population, this study investigates the spatial and temporal evolution of land use in the Liuchong River Basin, examining the interaction between human activities and the natural environment using a land use dynamics model, a transfer matrix model, a kernel density model, and the geographical detector. The results indicate that: (1) Land cover in the Liuchong River Basin primarily comprises cropland, forest, and shrubs, with land use change consisting mainly of an increase in the impervious area and a decrease in the area covered by shrubs. (2) The single land use dynamic degrees of barren land, impervious surfaces, and waters indicate a significant increase, while the area covered by shrubs decreased by 9.37%. The change in the single land use dynamic degree of the other cover types is more stable, and the comprehensive land use dynamic degree is 7.95%. The areas experiencing the greatest land use change in the watershed evolved from "sporadic distribution" to "dispersed" to "relatively concentrated". (3) Air temperature, rainfall, and elevation are important factors driving land use change in the Liuchong River Basin. The impacts of nighttime lighting, gross domestic product (GDP), and normalized difference vegetation index (NDVI) on land use change have gradually increased over time. The interaction detection indicated that the explanatory power of the interaction between driving factors in each period for land use change was always greater than that of any single factor. These results offer evidence-based support and scientific references for spatial planning, soil and water conservation, and ecological restoration in the watershed.
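A land-use transfer (transition) matrix of the kind used above can be built by cross-tabulating per-pixel class labels from two dates; a minimal sketch (the class names are hypothetical):

```python
from collections import Counter

def transfer_matrix(before, after, classes):
    """Cross-tabulate per-pixel land-cover labels from two dates:
    entry [i][j] counts pixels that were classes[i] at the first date
    and classes[j] at the second date."""
    counts = Counter(zip(before, after))
    return [[counts[(a, b)] for b in classes] for a in classes]
```

Row sums give each class's initial area (in pixels), column sums its final area, and off-diagonal entries the conversions between classes.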
Funding: Supported by the Hong Kong GRF RGC project 15217222: "Modernization of the leveling network in the Hong Kong territories."
Funding: Project (51675465) supported by the National Natural Science Foundation of China; Project (E2019203075) supported by the Natural Science Foundation of Hebei Province, China; Project (BJ2019001) supported by the Top Young Talents Project of the Education Department of Hebei Province, China; Project (Kfkt2017-07) supported by the State Key Laboratory Program of High Performance Complex Manufacturing, China.
Funding: Supported by the National Natural Science Foundation of China (Nos. 40776047, 90511005) and the National Basic Research Program of China (973 Project) (No. 2010CB428705).
Funding: Project supported by the National Natural Science Foundation of China (Nos. 91441117 and 51576182).
Funding: Jiangsu Innovation Program for Graduate Education (CXZZ11_0193); NUAA Research Funding (NJ2010009).
Abstract: An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is a challenging problem because of input uncertainty, and only small samples can be used due to the high cost of experimental measurements. Model validation, however, provides more confidence for decision makers while improving prediction accuracy. The confidence level method is introduced, and the optimum sample variance is determined using a new method in kernel density estimation to increase the credibility of the model validation. As a numerical example, the static frame model validation challenge problem presented by Sandia National Laboratories is chosen. The optimum bandwidth is selected in the kernel density estimation to build the probability model based on the calibration data. The model assessment is then carried out using the validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model; the predictions are consistent with the results obtained by other researchers. The results demonstrate that the method combining the improved confidence level with kernel density estimation is an effective approach to the model validation problem with small samples.
Abstract: The methods of deriving FeO and TiO_(2) contents from the Clementine spacecraft data were discussed, and an approach was developed to derive these contents from the measurements of the Moon Mineralogy Mapper (M^(3)) instrument on Chandrayaan-1. The density of lunar bedrock was then modeled on the basis of the derived FeO and TiO_(2) abundances. The FeO and TiO_(2) abundances derived from the M^(3) data were compared with the previous results from the Clementine data and were in good agreement. The FeO abundance data also agreed well with the Lunar Prospector data, which were used as an independent source. The previous Clementine-derived and new M^(3)-derived abundances were compared with the laboratory-measured FeO and TiO_(2) contents of the Apollo and Luna returned samples. The Clementine-derived FeO content was systematically 1%-2% lower than the laboratory measurements for all the returned samples. The M^(3)-derived content agreed well with the returned Apollo samples and was within ±2.8% of the laboratory measurements. The Clementine-derived TiO_(2) abundance was systematically 0.1%-4% higher than the laboratory measurements of the returned samples. The M^(3)-derived TiO_(2) agreed well (±0.6%) with the laboratory measurements of the returned samples, except for samples with high TiO_(2) content. However, these results should be interpreted carefully because the error range requires verification; no error analysis was provided with the previous Clementine-derived contents.
Funding: Supported by the National Natural Science Foundation of China (No. 11971433), the First Class Discipline of Zhejiang-A (Zhejiang Gongshang University-Statistics), and the Intramural Research Program of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.
Abstract: In this paper, we consider testing the hypothesis concerning the means of two independent semicontinuous distributions whose observations are zero-inflated, characterized by a sizable number of zeros plus positive observations from a continuous distribution. The continuous parts of the two semicontinuous distributions are assumed to follow a density ratio model. A new two-part test is developed for this kind of data. The proposed test statistic is the sum of one test for equality of the proportions of zero values and one conditional test for the continuous distributions. The statistic is proved to follow a χ^(2) distribution with two degrees of freedom. Simulation studies show that the proposed test controls the type I error rate at the desired level and is competitive with, and most of the time more powerful than, two popular tests. A real data example from a dietary intervention study is used to illustrate the usefulness of the proposed test.
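The two-part construction can be sketched as follows, with a plain z-test for the zero proportions and a Welch z-statistic for the positive parts standing in for the paper's density-ratio-based conditional test; the sum of the two squared statistics is then referred to a χ^(2) distribution with two degrees of freedom.

```python
import math
import statistics

def two_part_statistic(x, y):
    # Part 1: z-test for equal proportions of zeros.
    n1, n2 = len(x), len(y)
    p1 = sum(v == 0 for v in x) / n1
    p2 = sum(v == 0 for v in y) / n2
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    z1 = 0.0 if p in (0.0, 1.0) else (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    # Part 2: Welch z-statistic on the positive observations
    # (stand-in for the density-ratio-model conditional test).
    xp = [v for v in x if v > 0]
    yp = [v for v in y if v > 0]
    z2 = (statistics.mean(xp) - statistics.mean(yp)) / math.sqrt(
        statistics.variance(xp) / len(xp) + statistics.variance(yp) / len(yp))
    return z1 ** 2 + z2 ** 2  # ~ chi-square(2) under H0; 5% critical value 5.991
```

Because the two parts use disjoint information (zero indicators vs. positive magnitudes), the squared statistics are asymptotically independent, which is what justifies the two degrees of freedom.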
Funding: Supported by the National Natural Science Foundation of China (No. 51390493).
Abstract: Turbulent gas-particle flows are studied by a kinetic description using a probability density function (PDF). Unlike other investigators who derive the particle Reynolds stress equations from the PDF equations, the particle PDF transport equations are directly solved, either using a finite-difference method for two-dimensional (2D) problems or using a Monte-Carlo (MC) method for three-dimensional (3D) problems. The proposed differential stress model together with the PDF (DSM-PDF) is used to simulate turbulent swirling gas-particle flows. The simulation results are compared with the experimental results and the second-order moment (SOM) two-phase modeling results. All of these simulation results are in agreement with the experimental results, implying that the PDF approach validates the SOM two-phase turbulence modeling. The PDF model with the SOM-MC method is then used to simulate evaporating gas-droplet flows, and the simulation results are in good agreement with the experimental results.
Funding: Supported by the National Natural Science Foundation of China (61374044), the Shanghai Science and Technology Commission (12510709400), the Shanghai Municipal Education Commission (14ZZ088), and the Shanghai Talent Development Plan.
Abstract: This paper focuses on resolving the identification problem of a neuro-fuzzy model (NFM) applied in batch processes. A hybrid learning algorithm is introduced to identify the proposed NFM, combining the idea of an auxiliary error model with an identification principle based on the probability density function (PDF). The main contribution is that the NFM parameter updating approach is transformed into shape control of the PDF of the modeling error. More specifically, a virtual adaptive control system is constructed with the aid of the auxiliary error model, and the PDF shape control idea is then used to tune the NFM parameters so that the PDF of the modeling error is driven to follow a target PDF, which is Gaussian or uniform. Examples are used to validate the applicability of the proposed method, and comparisons are made with minimum-mean-square-error-based approaches.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11874263), the National Key R&D Program of China (Grant No. 2017YFE0131300), and the Shanghai Technology Innovation Action Plan (2020 Integrated Circuit Technology Support Program 20DZ1100605; 2021 Fundamental Research Area 21JC1404700).
Abstract: We investigate the topological phase transition driven by non-local electronic correlations in a realistic quantum anomalous Hall model consisting of d_(xy)–d_(x^(2)-y^(2)) orbitals. Three topologically distinct phases defined in the noninteracting limit evolve into different charge density wave phases under correlations. Two notable conclusions are obtained: the topological phase transition does not involve gap closing, and the dynamical fluctuations significantly suppress the charge order favored by the next-nearest-neighbor interaction. Our study sheds light on the stability of topological phases under electronic correlations, and we demonstrate a positive role played by dynamical fluctuations that is distinct from all previous studies on correlated topological states.
Funding: Supported by the National Basic Research Program of China (Grant Nos. 2012CB821301 and 2010CB832803), the National Natural Science Foundation of China (Grant Nos. 11004222 and 91121016), and the Chinese Academy of Sciences.
Abstract: In this paper, we report a method by which the ion quantity is estimated rapidly with an accuracy of 4%. The method is based on the low-temperature ion density theory combined with the ion crystal size obtained from experiment with micrometer precision. The method is objective, straightforward, and independent of molecular dynamics (MD) simulation. The result can be used as a reference for MD simulation, and the method can improve the reliability and precision of MD simulation. This method is very helpful for in-depth studies of ion crystals, such as phase transitions, spatial configuration, temporal evolution, dynamic character, cooling efficiency, and the temperature limit of the ions.
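A hedged sketch of such an estimate: in the cold-fluid (low-temperature) limit, an ion crystal in a harmonic trap has the uniform density n0 = ε0·m·(ωx² + ωy² + ωz²)/q², so the ion number follows from n0 times the crystal volume measured in the experiment. The trap frequencies, ion species, and spheroid dimensions below are illustrative assumptions, not the paper's values.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
AMU = 1.66053906660e-27  # atomic mass unit, kg
E = 1.602176634e-19      # elementary charge, C

def ion_number(mass_amu, freqs_hz, a_m, c_m, charge=1):
    """Estimate N = n0 * V for a prolate-spheroidal crystal with
    semi-axes a (radial) and c (axial), given the secular trap
    frequencies (fx, fy, fz) in Hz."""
    m = mass_amu * AMU
    q = charge * E
    # Cold-fluid uniform density in a harmonic pseudopotential.
    n0 = EPS0 * m * sum((2 * math.pi * f) ** 2 for f in freqs_hz) / q ** 2
    volume = 4.0 / 3.0 * math.pi * a_m ** 2 * c_m  # prolate spheroid
    return n0 * volume
```

For example, a ⁴⁰Ca⁺ crystal with radial frequencies of 1 MHz, an axial frequency of 200 kHz, and measured semi-axes of 50 μm × 200 μm yields a few thousand ions.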
Funding: This work was supported by the National Science Center, Poland (Narodowe Centrum Nauki, Polska) through the project "Statistical modeling of turbulent two-fluid flows with interfaces" (Grant No. 2016/21/B/ST8/01010, ID: 334165).
Abstract: This paper reviews differences between the deterministic (sharp and diffuse) and statistical models of the interphase region between two phases. In the literature this region is usually referred to as the (macroscopic) interface. Therein, the mesoscopic interface, which is defined at the molecular level and agitated by thermal fluctuations, is found with nonzero probability. For this reason, in this work, the interphase region is called the mesoscopic intermittency/transition region. The first part of the present work gives the rationale for introducing a statistical model of the mesoscopic intermittency region. It is argued that the classical (deterministic) sharp and diffuse models do not explain the experimental and numerical results presented in the literature. It is then shown that a statistical model of the mesoscopic intermittency region (SMIR) combines the existing sharp and diffuse models into a single coherent framework and explains the published experimental and numerical results. In the second part of the paper, the SMIR is used for the first time to predict equilibrium and nonequilibrium two-phase flow in a numerical simulation. To this end, a two-dimensional rising gas bubble is studied; the numerical results obtained are used as a basis to discuss differences between the deterministic and statistical models, showing that the statistical description has the potential to account for physical phenomena not previously considered in computer simulations.
Funding: Supported by the National Natural Science Foundation of China (No. 50921002).
Abstract: Heavy-medium cyclones are widely used to upgrade run-of-mine coal, but the understanding of flow in a cyclone containing a dense medium is still incomplete. By introducing turbulent diffusion into calculations of centrifugal settling, a theoretical distribution function giving the density field can be deduced. Qualitative analysis of the density field in every part of a cylindrical cyclone suggests an optimum design, which has exhibited good separation effectiveness and anti-wear performance in commercial operation.
Funding: Supported by the National Key Research and Development Program of China (2016YFC0400207), the National Natural Science Foundation of China (51222905, 51621061, 51509130), the Natural Science Foundation of Jiangsu Province, China (BK20150908), the Discipline Innovative Engineering Plan (111 Program, B14002), and the Jiangsu Key Laboratory of Agricultural Meteorology Foundation (JKLAM1601).
Abstract: Furrow irrigation is a traditional, widely used irrigation method around the world. Understanding the dynamics of soil water distribution is essential to developing effective furrow irrigation strategies, especially in water-limited regions. The objectives of this study are to analyze the root length density distribution and to explore soil water dynamics by simulating soil water content using a HYDRUS-2D model that accounts for root water uptake, for furrow-irrigated tomato plants in a solar greenhouse in Northwest China. Soil water contents were also observed in situ by ECH_2O sensors from 4 June to 19 June and from 21 June to 4 July, 2012. Results showed that the root length density of the tomato plants was concentrated in the 0-50 cm soil layers, and extended 0-18 cm toward the furrow and 0-30 cm along the bed axis. Soil water content values simulated by the HYDRUS-2D model agreed well with those observed by the ECH_2O sensors, with a regression coefficient of 0.988, a coefficient of determination of 0.89, and an index of agreement of 0.97. The HYDRUS-2D model with the calibrated parameters was then applied to explore optimal irrigation scheduling. Infrequent irrigation with a large amount of water per irrigation event could result in 10%-18% of the irrigation water being lost. We therefore recommend high irrigation frequency with a low amount of water per event for greenhouses in arid regions. The maximum irrigation amount and the suitable irrigation interval required to avoid plant water stress and drainage water losses were 34 mm and 6 days, respectively, for a daily average transpiration rate of 4.0 mm/d. In summary, the HYDRUS-2D model with consideration of root water uptake can be used to improve irrigation scheduling for furrow-irrigated tomato plants in greenhouses in arid regions.
Funding: Supported by the National Natural Science Foundation of China (11402276).
Abstract: Cavitation clouds of different internal structures result in different collapse pressures owing to the interaction among bubbles. The internal structure of the cloud is therefore required to accurately predict the collapse pressure. A cavitation model was developed through dimensional analysis and direct numerical simulation of the collapse of a bubble cluster. Bubble number density was included in the proposed model to characterize the internal structure of the bubble cloud. Implemented in simulations of flow over a projectile, the proposed model predicts a higher collapse pressure than the Singhal model. Results indicate that the collapse pressure of a detached cavitation cloud is affected by the bubble number density.
Abstract: Ni0.35Zn0.65Fe2O4 ferrite was synthesized by the self-propagating high-temperature synthesis (SHS) method. In the SHS process, the combustion temperature and velocity were the main process parameters, determined by the Fe content, the grain size of the ferrite powder, the relative density, and the oxygen pressure. In this paper the effects of Fe content, grain size, and oxygen pressure on the combustion temperature and velocity are discussed. The relation between combustion temperature and magnetic permeability was also studied, and polynomial regression was used to establish a mathematical model of this relation.
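The polynomial-regression step can be sketched as an ordinary least-squares fit via the normal equations; the abscissa would be a (preferably normalized) combustion temperature and the ordinate the measured permeability. The data in the test are illustrative, not measured values.

```python
def polyfit(xs, ys, deg):
    """Ordinary least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def polyval(coef, x):
    return sum(c * x ** k for k, c in enumerate(coef))
```

Normalizing the temperature (e.g. t = (T − 1000)/100) before fitting keeps the normal equations well conditioned for higher-degree fits.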
Funding: Co-supported by the National Natural Science Foundation of China (No. 61171127), the NSF of China (No. 60972024), and the NSTMP of China (No. 2011ZX03003-001-02 and No. 2012ZX03001007-003).
Abstract: It is understood that the forward-backward probability hypothesis density (PHD) smoothing algorithms proposed recently can significantly improve the state estimation of targets. However, our analyses in this paper show that they cannot give a good cardinality (i.e., number of targets) estimate. This is because backward smoothing ignores the effect of temporary track dropping caused by forward filtering and/or anomalous smoothing resulting from the deaths of targets. To cope with this problem, a novel PHD smoothing algorithm, called the variable-lag PHD smoother, is developed here; before backward smoothing, a detection process is added to identify whether the filtered cardinality varies within the smoothing lag. The analytical results show that the proposed smoother can almost eliminate the influences of temporary track dropping and anomalous smoothing, while both the cardinality and the state estimates are significantly improved. Simulation results on two multi-target tracking scenarios verify the effectiveness of the proposed smoother.
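The variable-lag idea can be sketched as a pre-smoothing check: at each time step the smoother uses the largest lag over which the filtered cardinality estimate stays constant, so backward smoothing never spans a (possibly temporary) change in target number. The interface below is an illustrative stand-in, not the paper's implementation.

```python
def variable_lag(card_seq, k, max_lag):
    """Return the largest lag L <= max_lag such that the filtered
    cardinality estimate card_seq stays equal to card_seq[k] over
    steps k+1 .. k+L; backward smoothing is then run over that span."""
    lag = 0
    while (lag < max_lag and k + lag + 1 < len(card_seq)
           and card_seq[k + lag + 1] == card_seq[k]):
        lag += 1
    return lag
```

If a temporary track drop occurs inside the nominal lag window, the returned lag shrinks to zero and the smoother falls back to the filtered estimate, which is what prevents the cardinality bias of fixed-lag smoothing.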
Abstract: This paper examines the forecasting performance of different kinds of GARCH models (GARCH, EGARCH, TARCH, and APARCH) under the normal, Student-t, and generalized error distributional assumptions. We compare the effect of the different distributional assumptions on the GARCH models. The data we analyze are the daily stock indexes of the Shenzhen Stock Exchange (SSE) in China from April 3, 1991 to April 14, 2005. We find that improvements in the overall estimation are achieved when asymmetric GARCH models are used with the Student-t distribution and the generalized error distribution. Moreover, it is found that the TARCH and GARCH models give better forecasting performance than the EGARCH and APARCH models. In forecasting performance, the model under the normal distribution gives more accurate forecasts than the non-normal densities, and the generalized error distributions clearly outperform the Student-t densities in the case of the SSE.
Funding: The National Natural Science Foundation of China (U1812401); the Science and Technology Support Plan of Guizhou Province (G[2020]4Y016); the 2019 Philosophy and Social Science Planning Key Topics of Guizhou Province (19GZZD07); the Guizhou Provincial Water Resources Science and Technology Funding Program (KT202108).
Abstract: Land use/cover change (LUCC) is a measure that offers insights into the interaction between human activities and the natural environment, which significantly impacts the ecological environment of a region. Based on data from the period 2000 to 2020 regarding land use, topography, climate, the economy, and population, this study investigates the spatial and temporal evolution of land use in the Liuchong River Basin, examining the interaction between human activities and the natural environment using the land use dynamics model, the transfer matrix model, the kernel density model, and the geographical detector. The results indicate that: (1) The land cover in the Liuchong River Basin primarily comprises cropland, forest, and shrubs, with land use change mainly consisting of an increase in the impervious area and a decrease in the surface area covered by shrubs. (2) The dynamic degree of single land use for barren land, impervious surfaces, and waters shows a significant increase, with areas covered by shrubs decreasing by 9.37%. The change in the degree of single land use for the other cover types is more stable, and the degree of comprehensive land use is 7.95%. The areas experiencing the greatest land use change in the watershed evolved from a "sporadic distribution" to a "dispersed" and then a "relatively concentrated" pattern. (3) Air temperature, rainfall, and elevation are important factors driving land use changes in the Liuchong River Basin. The impacts of nighttime lighting, gross domestic product (GDP), and normalized difference vegetation index (NDVI) on land use change have gradually increased over time. The interaction detection indicated that the explanatory power of the interaction between the driving factors in each period for land use changes was always greater than that of any single factor. The results of this study offer evidence-based support and scientific references for spatial planning, soil and water conservation, and ecological restoration in the watershed.