Funding: Supported by the Paul Scherrer Institute, Switzerland, through the NES/GFA-ABE Cross Project.
Abstract: To ensure agreement between theoretical calculations and experimental data, the parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the 'true' distribution of the considered observables. Furthermore, the models are chosen globally, i.e., they are assumed to apply across the entire energy range of interest. However, this approach overlooks the uncertainties inherent in the models themselves. In this work, we propose that, instead of globally selecting a winning model set and proceeding as if it were the 'true' model set, we take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method involves executing a set of TALYS calculations in which multiple nuclear physics models and their parameters are varied randomly to yield a vector of calculated observables. The likelihood function values computed at each incident energy point were then combined with the prior distributions to obtain updated posterior distributions for selected cross sections and elastic angular distributions. Because the cross sections and elastic angular distributions were updated locally, on a per-energy-point basis, the approach typically produces discontinuities or 'kinks' in the cross-section curves; these were addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ⁵⁸Ni between 1 and 100 MeV. The results compare favorably with experimental data as well as with the TENDL-2023 evaluation.
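The per-energy-point Bayesian weighting described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' TALYS-based implementation: it assumes a Gaussian likelihood for a single measured cross section, and the predictions, measurement, and uncertainty are hypothetical values.

```python
import math

def bma_posterior_weights(predictions, measurement, sigma, prior=None):
    """Combine several models' predictions at one energy point.

    predictions: model-calculated cross sections (hypothetical values)
    measurement: experimental cross section at this energy
    sigma:       experimental uncertainty (Gaussian likelihood assumed)
    prior:       prior model probabilities (uniform if None)
    Returns (posterior weights, BMA-averaged cross section).
    """
    n = len(predictions)
    prior = prior or [1.0 / n] * n
    # Gaussian likelihood of the measurement under each model's prediction
    likes = [math.exp(-0.5 * ((p - measurement) / sigma) ** 2) for p in predictions]
    evidence = sum(w * l for w, l in zip(prior, likes))
    post = [w * l / evidence for w, l in zip(prior, likes)]
    xs = sum(w * p for w, p in zip(post, predictions))
    return post, xs

# Toy example: three model predictions against one measurement
weights, xs = bma_posterior_weights([2.1, 1.8, 2.6], measurement=2.0, sigma=0.2)
```

Repeating this independently at every incident energy is what produces the per-energy-point "kinks" the abstract mentions, which is why a smoothing step such as spline interpolation is applied afterwards.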
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10973021, 10778626, and 10933001), the National Basic Research Development Program of China (Grant No. 2007CB815404), and the China Scholarship Council (CSC) (Grant No. 2007104275).
Abstract: A number of spectroscopic surveys have been carried out or are planned to study the origin of the Milky Way. Their exploitation requires reliable automated methods and software to measure the fundamental parameters of stars. Adopting the ULySS package, we tested the effect of different resolutions and signal-to-noise ratios (SNR) on the measurement of the stellar atmospheric parameters (effective temperature Teff, surface gravity log g, and metallicity [Fe/H]). We show that ULySS is reliable for determining these parameters with medium-resolution spectra (R ~ 2000). We then applied the method to measure the parameters of 771 stars selected from the commissioning database of the Guoshoujing Telescope (LAMOST). The results were compared with the SDSS/SEGUE Stellar Parameter Pipeline (SSPP), and we derived precisions of 167 K, 0.34 dex, and 0.16 dex for Teff, log g, and [Fe/H], respectively. Furthermore, 120 of these stars were selected to construct the primary stellar spectral template library (Version 1.0) of LAMOST, and they will serve as basic ingredients for the LAMOST automated parametrization pipeline.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61073145, 41140027, and 41210104028), the Shanxi Province Natural Science Foundation (No. 2012011011-4), the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi, China (No. 20121011), and the Shanxi Province Science Foundation for Youths (No. 2012021015-4).
Abstract: Effective extraction of data association rules can provide a reliable basis for the classification of stellar spectra. The concepts of stellar-spectrum weighted itemsets and stellar-spectrum weighted association rules are introduced, and the weight of a single property in the stellar spectrum is determined by its information entropy. On that basis, a method is presented to mine the association rules of a stellar spectrum based on a weighted frequent-pattern tree. Important properties of the spectral lines are highlighted by this method, while the waveform of the whole spectrum is also taken into account. The experimental results show that the data association rules of a stellar spectrum mined with this method are consistent with the main features of the stellar spectral types.
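The entropy-derived property weights mentioned above can be sketched as follows. The discretized property values and the normalization convention (higher entropy gives a larger weight, normalized to sum to one) are illustrative assumptions, not the paper's exact scheme:

```python
import math
from collections import Counter

def entropy_weight(values):
    """Shannon entropy of one property's discretized values.
    A constant property carries no information and gets entropy 0."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def normalized_weights(columns):
    """Per-property weights, normalized to sum to 1."""
    ents = [entropy_weight(col) for col in columns]
    total = sum(ents)
    return [e / total for e in ents]

# Toy discretized line-strength classes (strong/medium/weak) for 3 properties
props = [["s", "s", "w", "m"],   # varied -> informative
         ["s", "s", "s", "s"],   # constant -> uninformative
         ["s", "w", "s", "w"]]
w = normalized_weights(props)
```

The weights would then scale item support counts when building the weighted frequent-pattern tree, so that varied, informative spectral-line properties dominate the mined rules.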
Funding: Supported by the National Key Basic Research Program of China (NKBRP, Grant No. 2014CB845700) and the National Natural Science Foundation of China (Grant Nos. 11473001 and 11233004).
Abstract: The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) published its first data release (DR1) in 2013, currently the largest dataset of stellar spectra in the world. We combine the PASTEL catalog and SIMBAD radial velocities as a testing standard to validate the stellar parameters (effective temperature Teff, surface gravity log g, metallicity [Fe/H], and radial velocity Vr) derived in DR1. Through cross-identification of the DR1 catalogs with the PASTEL catalog, we obtain a preliminary sample of 422 stars. After removing stellar parameter measurements from problematic spectra and applying effective temperature constraints to the sample, we compare the stellar parameters from DR1 with those from PASTEL and SIMBAD and demonstrate that the DR1 results are reliable within restricted ranges of Teff. We derive standard deviations of 110 K, 0.19 dex, and 0.11 dex for Teff, log g, and [Fe/H], respectively, when Teff < 8000 K, and 4.91 km s⁻¹ for Vr when Teff < 10 000 K. Systematic errors are negligible except for those of Vr. In addition, metallicities in DR1 are systematically higher than those in PASTEL, in the range PASTEL [Fe/H] < -1.5.
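The cross-identification and dispersion statistics described above can be sketched as below. The star identifiers and parameter values are hypothetical, and matching is done on a shared key rather than the positional cross-match a real pipeline would use:

```python
import statistics

def compare_catalogs(dr1, reference, teff_max=8000.0):
    """Dispersion of [Fe/H] differences for stars found in both
    catalogs, restricted to Teff < teff_max (as in the DR1 validation).
    dr1:       {star_id: (teff, feh)}  -- hypothetical records
    reference: {star_id: feh}          -- hypothetical PASTEL-like values
    """
    diffs = [feh - reference[name]
             for name, (teff, feh) in dr1.items()
             if name in reference and teff < teff_max]
    return statistics.pstdev(diffs), len(diffs)

dr1 = {"HD1": (5800.0, -0.35), "HD2": (6200.0, 0.10),
       "HD3": (9000.0, -1.20), "HD4": (5500.0, -0.80)}
pastel = {"HD1": -0.45, "HD2": 0.05, "HD3": -1.00, "HD4": -0.95}
sigma_feh, n_matched = compare_catalogs(dr1, pastel)  # HD3 fails the Teff cut
```

The same difference statistics, computed separately per parameter, yield the kind of standard deviations (and any systematic offsets, via the mean difference) quoted in the abstract.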
Funding: Supported by the Singapore Millennium Foundation and the National Natural Science Foundation of China (Grant Nos. 60821091 and 60674308).
Abstract: In this paper, sampled-data based average-consensus control is considered for networks consisting of continuous-time first-order integrator agents in a noisy distributed communication environment. The impact of the sampling size and the number of network nodes on system performance is analyzed. The control input of each agent can use only information measured at the sampling instants from its neighborhood, rather than the complete continuous process, and the measurements of its neighbors' states are corrupted by random noises. By probability limit theory and properties of the graph Laplacian matrix, it is shown that for a connected network, the static mean square error between the individual states and the average of the initial states of all agents can be made arbitrarily small, provided the sampling size is sufficiently small. Furthermore, by properly choosing the consensus gains, almost sure consensus can be achieved. It is worth pointing out that an uncertainty principle of Gaussian networks is obtained, which implies that in the case of white Gaussian noises, no matter what the sampling size is, the product of the steady-state and transient performance indices is always equal to or larger than a constant depending on the noise intensity, the network topology, and the number of network nodes.
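The sampled-data consensus mechanism described above can be illustrated with a small discrete-time simulation. The graph, gain, and noise level are illustrative choices only, not the paper's analytical setup, and the fixed gain stands in for the (possibly time-varying) consensus gains analyzed there:

```python
import random

def sampled_consensus(x0, edges, gain, steps, noise_std, seed=1):
    """Sampled-data consensus update: at each sampling instant every agent
    moves toward the noisy measured states of its neighbors on an
    undirected graph given as an edge list."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            # neighbor states are observed through additive measurement noise
            u = sum((x[j] + rng.gauss(0.0, noise_std)) - x[i] for j in nbrs[i])
            nxt.append(x[i] + gain * u)
        x = nxt
    return x

# Path graph 0-1-2; initial average is 4.0
x = sampled_consensus([0.0, 4.0, 8.0], edges=[(0, 1), (1, 2)],
                      gain=0.1, steps=200, noise_std=0.01)
```

With small noise and a small effective step (the gain here plays the role of the sampling size times the consensus gain), the states cluster near the initial average, consistent with the arbitrarily-small static mean square error claimed in the abstract.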
Funding: Supported by the ADB Loan for the Flood Management Project in the Yellow River Basin (Grant No. YH-SW-XH-02).
Abstract: This study evaluated the application of the European flood forecasting operational real-time system (EFFORTS) to the Yellow River. An automatic data pre-processing program was developed to provide real-time hydrometeorological data, and various GIS layers were collected and developed to meet the demands of the distributed hydrological model in EFFORTS. The model parameters were calibrated and validated using more than ten years of historical hydrometeorological data from the study area. The San-Hua Basin (from the Sanmenxia Reservoir to the Huayuankou Hydrological Station), the most geographically important area of the Yellow River, was chosen as the study area. The analysis indicates that EFFORTS improves work efficiency, extends the flood forecasting lead time, and attains an acceptable level of forecasting accuracy in the San-Hua Basin, with a mean deterministic coefficient at Huayuankou Station, the basin outlet, of 0.90 in calibration and 0.96 in validation. The analysis also shows that simulation accuracy is better for the southern part than for the northern part of the San-Hua Basin. This implies that, along with the characteristics of the basin and the runoff-generation mechanisms of the hydrological model, the hydrometeorological data play an important role in simulating hydrological behavior.
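The deterministic coefficient quoted above is conventionally computed in the Nash-Sutcliffe form (one minus the ratio of squared forecast errors to the variance of the observations). A minimal sketch, with hypothetical discharge values rather than the study's Huayuankou records:

```python
def deterministic_coefficient(observed, simulated):
    """Deterministic coefficient, Nash-Sutcliffe form: 1 - SSE/SST.
    1.0 is a perfect fit; values near 1 indicate good forecasts."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# Hypothetical observed vs. simulated discharges (m^3/s)
obs = [120.0, 340.0, 560.0, 410.0, 230.0]
sim = [130.0, 320.0, 540.0, 430.0, 240.0]
dc = deterministic_coefficient(obs, sim)
```

Mean values of this coefficient over calibration and validation floods give figures directly comparable to the 0.90 and 0.96 reported for the basin outlet.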
Abstract: Understanding land-use evolution and its effect on carbon storage is of great significance for mitigating climate change in metropolitan areas and promoting green, low-carbon development. Against the background of China's "dual-carbon" goals, this study combined point-of-interest (POI) data with the patch-generating land use simulation (PLUS) model, optimized through a double-constrained transfer matrix, and coupled it with the integrated valuation of ecosystem services and trade-offs (InVEST) model to analyze land-use evolution in the Jinan metropolitan area of Shandong Province from 2000 to 2020 and its impact on ecosystem carbon storage. Land-use change in 2030 and 2060 was simulated, and the corresponding ecosystem carbon storage estimated, under three scenarios (natural development, urban development, and ecological protection); shifts in the center of gravity of carbon storage were analyzed; and the drivers of the spatial differentiation of carbon storage were explored with an optimal-parameter geographical detector. The results show that: (1) from 2000 to 2020, the areas of cropland, grassland, and unused land in the Jinan metropolitan area decreased continuously, forest area fluctuated upward, and water bodies and construction land grew rapidly; (2) over the same period, the spatial patterns of carbon storage and land use were similar, divided by the main course of the Yellow River into a "high in the southeast, low in the northwest" distribution, with cropland as the main source of carbon storage, accounting for more than 80% of the total; (3) carbon storage declined under all simulated scenarios, mainly because high-carbon-density cropland was converted to low-carbon-density construction land; the ecological-protection scenario retained the most carbon, with total stocks of 4226.86×10⁶ t in 2030 and 3967.94×10⁶ t in 2060; (4) the center of gravity of carbon storage shifted somewhat across development periods and scenarios, with its trajectory driven by land-use change, but it remained within Licheng District throughout, indicating relatively comprehensive and balanced development of the Jinan metropolitan area; (5) all driving factors clearly affect the spatial distribution of carbon storage; population density has the greatest explanatory power for its spatial differentiation, and under factor interaction every factor shows enhanced explanatory power for carbon storage.
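The InVEST-style carbon bookkeeping behind these stock estimates reduces to summing, over land-use classes, the class area times its carbon density; converting cropland to construction land then lowers the total because the density drops. A toy sketch with hypothetical areas and densities (not the study's calibrated values):

```python
def carbon_storage(areas_ha, densities):
    """Total ecosystem carbon stock: sum over land-use classes of
    area (ha) times carbon density (t C/ha), as in the InVEST carbon model."""
    return sum(areas_ha[lu] * densities[lu] for lu in areas_ha)

# Hypothetical areas (ha) and carbon densities (t C/ha)
areas = {"cropland": 1000.0, "forest": 300.0, "built": 200.0}
dens = {"cropland": 80.0, "forest": 150.0, "built": 10.0}
total = carbon_storage(areas, dens)

# Converting 100 ha of cropland to construction land reduces the stock
areas_2060 = dict(areas, cropland=900.0, built=300.0)
total_2060 = carbon_storage(areas_2060, dens)
```

Running this accounting on the PLUS-simulated land-use maps for each scenario and year is what yields scenario-wise totals like those reported above.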
Funding: This work was supported by the National Natural Science Foundation of China.
Abstract: The sampled-data (SD) based linear quadratic (LQ) control problem of stochastic linear continuous-time (LCT) systems is discussed. Two types of systems are involved: one is time-invariant and the other is time-varying. In addition to the stability analysis of the closed-loop systems, the index difference between SD-based LQ control and conventional LQ control is investigated. It is shown that when the sampling period ΔT is small, so is the index difference. In addition, upper bounds on the differences are presented, which are O(ΔT²) and O(ΔT), respectively.