Funding: Supported by the National Natural Science Foundation of China (Nos. 11775311 and U2067205), the Stable Support Basic Research Program Grant (BJ010261223282), and the Research and Development Project of China National Nuclear Corporation.
Abstract: Lead (Pb) plays a significant role in the nuclear industry and is extensively used in radiation shielding, radiation protection, neutron moderation, radiation measurements, and various other critical functions. Consequently, the measurement and evaluation of Pb nuclear data are highly valued in nuclear research. Using the time-of-flight (ToF) method, the neutron leakage spectra from three ^(nat)Pb samples were measured at 60° and 120° at the neutronics integral experimental facility of the China Institute of Atomic Energy (CIAE). The ^(nat)Pb sample sizes were 30 cm×30 cm×5 cm, 30 cm×30 cm×10 cm, and 30 cm×30 cm×15 cm. Neutrons were generated by the Cockcroft-Walton accelerator, which produced approximately 14.5 MeV and 3.5 MeV neutrons through the T(d,n)^(4)He and D(d,n)^(3)He reactions, respectively. Leakage neutron spectra were also calculated with the Monte Carlo code MCNP-4C, using the nuclear data for the Pb isotopes from four libraries individually: CENDL-3.2, JEFF-3.3, JENDL-5, and ENDF/B-VIII.0. By comparing the simulated and experimental results, improvements and deficiencies in the evaluated nuclear data of the Pb isotopes were analyzed. Most of the calculated results were consistent with the experimental results; however, a few regions did not fit well. In the (n,el) energy range, the simulated results from CENDL-3.2 were significantly overestimated; in the (n,inl)D and (n,inl)C energy regions, the results from CENDL-3.2 and ENDF/B-VIII.0 were significantly overestimated at 120°, and the results from JENDL-5 and JEFF-3.3 were underestimated at 60° in the (n,inl)D energy region. The calculated spectra were compared with the experimental spectra in terms of spectrum shape and C/E values. The results indicate that the theoretical simulations, using different data libraries, overestimate or underestimate the measured values in certain energy ranges. Secondary neutron energies and angular distributions in the data files are presented to explain these discrepancies.
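As a reference for how ToF spectra map onto neutron energy, a minimal sketch is given below. The flight path and timing values are hypothetical placeholders, not parameters of the CIAE setup; the relativistic kinematics is standard rather than taken from the paper.

```python
import numpy as np

M_N_MEV = 939.56542       # neutron rest mass energy [MeV]
C_M_PER_NS = 0.299792458  # speed of light [m/ns]

def tof_to_energy(flight_time_ns, flight_path_m):
    """Convert a neutron time of flight to kinetic energy (relativistic).

    E_kin = m_n c^2 * (gamma - 1), with beta = (L/t)/c.
    """
    beta = (flight_path_m / flight_time_ns) / C_M_PER_NS
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    return M_N_MEV * (gamma - 1.0)

# Hypothetical example: a 3 m flight path (placeholder, not the CIAE geometry)
for t_ns in (60.0, 120.0, 200.0):
    print(f"t = {t_ns:6.1f} ns  ->  E = {tof_to_energy(t_ns, 3.0):6.2f} MeV")
```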
Funding: This work was supported by the general program (No. 1177531) and joint funding (No. U2067205) from the National Natural Science Foundation of China.
Abstract: A benchmark experiment on ^(238)U slab samples was conducted using a deuterium-tritium neutron source at the China Institute of Atomic Energy. The leakage neutron spectra in the 0.8-16 MeV energy range at 60° and 120° were measured using the time-of-flight method. The samples were prepared as rectangular slabs with a 30 cm square base and thicknesses of 3, 6, and 9 cm. The leakage neutron spectra were also calculated using the MCNP-4C program based on the latest evaluated ^(238)U neutron data files from CENDL-3.2, ENDF/B-VIII.0, JENDL-5.0, and JEFF-3.3. Based on the comparison, the deficiencies and improvements in the ^(238)U evaluated nuclear data were analyzed. The results showed the following. (1) The calculated results for CENDL-3.2 significantly overestimated the measurements in the elastic-scattering energy interval at 60° and 120°. (2) The calculated results for CENDL-3.2 overestimated the measurements in the inelastic-scattering energy interval at 120°. (3) The calculated results for CENDL-3.2 significantly overestimated the measurements in the 3-8.5 MeV energy interval at 60° and 120°. (4) The calculated results with JENDL-5.0 were generally consistent with the measurements.
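Comparisons of this kind are usually condensed into C/E ratios per energy interval. The sketch below shows one hedged way to form those ratios; all numbers are illustrative placeholders, not the measured ^(238)U spectra.

```python
import numpy as np

def c_over_e(calc_spectrum, meas_spectrum, meas_rel_unc):
    """Return C/E ratios and their relative 1-sigma uncertainties per energy bin."""
    ce = np.asarray(calc_spectrum, float) / np.asarray(meas_spectrum, float)
    return ce, ce * np.asarray(meas_rel_unc, float)

# Illustrative placeholder numbers (not experimental data)
edges = np.array([0.8, 3.0, 8.5, 12.0, 16.0])   # MeV bin edges
calc = np.array([1.10, 0.95, 1.02, 0.99])       # bin-integrated calculated flux (e.g. an MCNP tally)
meas = np.array([1.00, 1.00, 1.00, 1.00])       # bin-integrated measured flux (normalized)
ce, ce_unc = c_over_e(calc, meas, 0.05 * np.ones_like(calc))
for lo, hi, r, u in zip(edges[:-1], edges[1:], ce, ce_unc):
    print(f"{lo:4.1f}-{hi:4.1f} MeV: C/E = {r:.3f} +/- {u:.3f}")
```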
Funding: Funding from the Paul Scherrer Institute, Switzerland, through the NES/GFA-ABE Cross Project.
Abstract: To ensure agreement between theoretical calculations and experimental data, the parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the 'true' distribution of the considered observables. Furthermore, the models are chosen globally, meaning they are applied across the entire energy range of interest. However, this approach overlooks the uncertainties inherent in the models themselves. In this work, we propose that instead of globally selecting a winning model set and proceeding with it as if it were the 'true' model set, we take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method involves executing a set of TALYS calculations in which multiple nuclear physics models and their parameters are varied randomly to yield a vector of calculated observables. The computed likelihood function values at each incident energy point are then combined with the prior distributions to obtain updated posterior distributions for selected cross sections and elastic angular distributions. Because the cross sections and elastic angular distributions are updated locally on a per-energy-point basis, the approach typically produces discontinuities, or "kinks", in the cross-section curves; these were addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ^(58)Ni between 1 and 100 MeV. The results compared favorably with experimental data as well as with the TENDL-2023 evaluation.
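To make the weighting step concrete, here is a schematic sketch of a per-energy-point Bayesian model average: posterior weights proportional to prior times a Gaussian likelihood, followed by a weighted mean over model predictions. The arrays are illustrative, not the actual TALYS output used in the paper.

```python
import numpy as np

def bma_update(predictions, prior_weights, y_exp, sigma_exp):
    """Bayesian model averaging at a single incident-energy point.

    predictions   : model-predicted observable for each sampled model/parameter set
    prior_weights : prior probability of each set (sums to 1)
    y_exp, sigma_exp : experimental value and its uncertainty
    Returns posterior weights and the posterior-weighted mean prediction.
    """
    predictions = np.asarray(predictions, float)
    log_like = -0.5 * ((predictions - y_exp) / sigma_exp) ** 2
    w = np.asarray(prior_weights, float) * np.exp(log_like - log_like.max())
    w /= w.sum()
    return w, np.dot(w, predictions)

# Illustrative example with three hypothetical cross-section predictions (mb)
post_w, post_mean = bma_update([520.0, 480.0, 610.0], [1/3, 1/3, 1/3],
                               y_exp=500.0, sigma_exp=25.0)
print(post_w, post_mean)
```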
Funding: Open Access funding provided by Lib4RI (Library for the Research Institutes within the ETH Domain: Eawag, Empa, PSI & WSL) and by the Paul Scherrer Institute through the NES/GFA-ABE Cross Project.
Abstract: In this work, we explore the use of an iterative Bayesian Monte Carlo (iBMC) method for nuclear data evaluation within a TALYS Evaluated Nuclear Data Library (TENDL) framework. The goal is to probe the model and parameter space of the TALYS code system to find the optimal model and parameter sets that reproduce selected experimental data. The method involves the simultaneous variation of many nuclear reaction models, as well as their parameters, included in the TALYS code. The 'best' model set with its parameter set was obtained by comparing model calculations with selected experimental data. Three experimental data types were used: (1) reaction cross sections, (2) residual production cross sections, and (3) elastic angular distributions. To improve the fit to the experimental data, we update the 'best' parameter set (the file that maximizes the likelihood function) in an iterative fashion. Convergence was determined by monitoring the evolution of the maximum likelihood estimate (MLE) and was considered reached when the relative change in the MLE over the last two iterations was within 5%. Once the final 'best' file was identified, parameter uncertainties and covariance information were inferred for this file by varying the model parameters around it. In this way, we ensured that the parameter distributions are centered on our evaluation. The proposed method was applied to the evaluation of p + ^(59)Co between 1 and 100 MeV. Finally, the adjusted files were compared with experimental data from the EXFOR database as well as with evaluations from the TENDL-2019, JENDL/He-2007, and JENDL-4.0/HE nuclear data libraries.
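The convergence rule quoted above (relative change in the maximum likelihood estimate below 5% over the last two iterations) can be written in a few lines. This is a hedged sketch of the stopping test only; the MLE values are hypothetical, not from the ^(59)Co evaluation.

```python
def mle_converged(mle_history, tol=0.05):
    """Return True when the relative change in the maximum likelihood estimate
    over the last two iterations is within `tol` (5% in the paper)."""
    if len(mle_history) < 2:
        return False
    prev, last = mle_history[-2], mle_history[-1]
    return abs(last - prev) / abs(prev) <= tol

# Illustrative MLE trajectory (hypothetical values)
history = [-1520.0, -1380.0, -1345.0, -1333.0]
print(mle_converged(history))   # True: relative change ~ 0.9%
```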
Funding: Supported by the National Natural Science Foundation of China (No. 12075067) and the National Key R&D Program of China (No. 2018YFE0180900).
Abstract: To benefit from recent advances in modeling and computational algorithms, as well as the availability of new covariance data, sensitivity and uncertainty analyses are needed to quantify the impact of uncertain sources on the design parameters of small prismatic high-temperature gas-cooled reactors (HTGRs). In particular, the contribution of nuclear data to the k_(eff) uncertainty is an important part of the uncertainty analysis of small-sized HTGR physics calculations. In this study, a small-sized HTGR designed by China Nuclear Power Engineering Co., Ltd. was selected for k_(eff) uncertainty analysis over full-lifetime burnup calculations. Models of the cold zero power (CZP) condition and the full-lifetime burnup process were constructed using the Reactor Monte Carlo code RMC for neutron transport calculation, depletion calculation, and sensitivity and uncertainty analysis. For the sensitivity analysis, the Contribution-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance Characterization (CLUTCH) method was applied to obtain the sensitivity information, and the "sandwich" method was used to quantify the k_(eff) uncertainty. We also compared the k_(eff) uncertainties with those of other typical reactors. Our results show that ^(235)U is the largest contributor to the k_(eff) uncertainty under both the CZP and depletion conditions, whereas the contribution of ^(239)Pu is not very significant because of the low discharge burnup design. It is worth noting that the radiative capture reaction of ^(28)Si contributes significantly to the k_(eff) uncertainty owing to the specific fuel design. However, the k_(eff) uncertainty during the full-lifetime depletion process was relatively stable, increasing by only 1.12% owing to the low discharge burnup design of small-sized HTGRs. These numerical results are beneficial for neutronics design and core parameter optimization in further uncertainty propagation and quantification studies for small-sized HTGRs.
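The "sandwich" propagation mentioned above combines a sensitivity vector with a nuclear-data covariance matrix. A minimal sketch follows; the sensitivity and covariance arrays are hypothetical stand-ins, not the CLUTCH/RMC output of this study.

```python
import numpy as np

def sandwich_keff_uncertainty(sensitivity, covariance):
    """Relative k_eff uncertainty from the sandwich rule: var = S^T C S.

    sensitivity : relative sensitivities (dk/k per dx/x) for each nuclide/reaction/group
    covariance  : relative covariance matrix of the corresponding nuclear data
    """
    s = np.asarray(sensitivity, float)
    var = s @ np.asarray(covariance, float) @ s
    return np.sqrt(var)

# Tiny hypothetical two-parameter example (not the HTGR data of this study)
S = np.array([0.30, -0.05])
C = np.array([[4.0e-4, 1.0e-5],
              [1.0e-5, 9.0e-4]])
print(f"relative k_eff uncertainty = {sandwich_keff_uncertainty(S, C):.4%}")
```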
Funding: Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences ADS Project (No. XDA03030200) and the National Natural Science Foundation of China (No. 91426301).
Abstract: Recent progress in nuclear data measurements for ADS at the Institute of Modern Physics is reviewed briefly. A nuclear data terminal was established based on the cooler storage ring of the Heavy Ion Research Facility in Lanzhou. The nuclear data measurement facility for the ADS spallation target has been constructed, providing an important platform for experimental measurements of spallation reactions. A number of experiments have been conducted at the nuclear data terminal. A Neutron Time-of-Flight (NTOF) spectrometer was developed for the study of neutron production from spallation reactions related to the ADS project. Experiments with 400 MeV/u ^(16)O ions bombarding a tungsten target were performed using the NTOF spectrometer. Neutron yields for 250 MeV protons incident on a thick grain-made tungsten target and a thick solid lead target were measured using the water-bath neutron activation method. Spallation residue production was studied by bombarding W and Pb targets with a 250 MeV proton beam using the neutron activation method. Benchmarking of evaluated nuclear data libraries was performed for D-T neutrons on ADS-relevant materials using the benchmark experimental facility at the China Institute of Atomic Energy.
Funding: Supported by the Natural Science Foundation of China (Nos. 11405204, 11305205, and 10675123), the Special Program for Informatization of the Chinese Academy of Sciences (No. XXH12504-1-09), and the National Special Program for ITER (No. 2014GB1120001).
Abstract: Accurate and reliable nuclear data libraries are essential for the calculation and design of advanced nuclear systems. A 1200-fine-group nuclear data library, the Hybrid Evaluated Nuclear Data Library/Fine Group (HENDL/FG), covering neutrons up to 150 MeV, has been developed to improve the accuracy of neutronics calculations and analysis. Corrections for Doppler, resonance self-shielding, and thermal upscatter effects were applied to HENDL/FG. Shielding and criticality safety benchmarks were performed to test the accuracy and reliability of the library. The discrepancies between calculated and measured nuclear parameters fell within a reasonable range.
Abstract: New evaluations for several actinide nuclei in the third version of the Chinese Evaluated Nuclear Data Library for Neutron Reaction Data (CENDL-3.1) have been completed and released. The evaluations cover all neutron-induced reactions with uranium, neptunium, plutonium, and americium in the mass ranges A = 232-241, 236-239, 236-246, and 240-244, respectively, for incident neutron energies up to 20 MeV. In the present evaluation, much effort was devoted to improving the reliability of the evaluated nuclear data using newly available measurements, especially where experimental data are scarce or absent. A general description of the evaluation of these actinide data is presented.
Abstract: Beginning from the proposition that the availability of reliable data is necessary for the application of nuclear techniques, we explore how such data are obtained and how the extent of their reliability is ascertained. These questions are considered first in general terms, in relation to data types and organizational frameworks, and then with particular reference to the journal Atomic Data and Nuclear Data Tables. The reliability issue is further discussed in terms of this journal's policies and unique presentation style.
Abstract: A new data-access method is introduced that effectively solves the problem of high-speed, real-time readout of nuclear instrument data within a small storage space. The method applies a linked-list storage mode to a microcontroller unit (MCU) system and realizes pointer-based access to nuclear data within the limited storage space of the MCU. Experimental results show that this method overcomes several problems of traditional data storage methods and offers the advantages of simple program design, stable performance, accurate data, strong repeatability, and reduced storage usage.
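As a conceptual sketch of the linked-list storage mode, the following code mimics pointer-chained storage in a fixed-size node pool. On a real MCU this would be written in C against raw memory; here array indices stand in for addresses, and all sizes are hypothetical.

```python
# Pointer-chained storage in a fixed-size node pool, mimicking how pulse data
# could be linked inside a small MCU memory region.
POOL_SIZE = 8
pool = [{"value": None, "next": None} for _ in range(POOL_SIZE)]
free_head = 0
for i in range(POOL_SIZE - 1):          # chain all nodes into a free list
    pool[i]["next"] = i + 1

data_head = None                         # head "pointer" of the stored-data list

def store(value):
    """Pop a node from the free list and push it onto the data list."""
    global free_head, data_head
    if free_head is None:
        return False                     # pool exhausted: the small storage space is full
    node = free_head
    free_head = pool[node]["next"]
    pool[node]["value"] = value
    pool[node]["next"] = data_head
    data_head = node
    return True

def read_all():
    """Walk the data list by following next-pointers (most recent first)."""
    node, out = data_head, []
    while node is not None:
        out.append(pool[node]["value"])
        node = pool[node]["next"]
    return out

for v in (101, 102, 103):
    store(v)
print(read_all())   # [103, 102, 101]
```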
Abstract: The recently proposed single phase-shift deep neural network (SPDNN), owing to its small network size and high learning accuracy, has become the first practical deep-learning tool for fitting and evaluating complex neutron resonance cross sections. In the process of learning resonance cross sections with the SPDNN, many factors significantly affect the training quality, the training efficiency, and the generalization of the trained model. These factors include the spectral range and band width of the resonance cross section, which determine the size of the network's phase-shift layer, as well as the number of hidden layers, the number of neurons per layer, the activation function, the loss function, the number of training steps, and the preprocessing of the training data. To further improve the practicality of the SPDNN in resonance cross-section studies, the influence of these factors on the fitting performance of the network was examined in detail. Based on this examination, suitable network-construction and training methods for the SPDNN in resonance cross-section studies were determined, supporting its wider application.
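As a purely generic illustration of where the listed hyperparameters (hidden-layer size, activation, loss, training steps, preprocessing) enter a fitting network, here is a minimal one-hidden-layer numpy network trained on a synthetic resonance-like curve. It is not the SPDNN phase-shift architecture, and all values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-resonance-like curve (illustrative, not evaluated data)
E = np.linspace(0.5, 3.0, 200)                        # energy grid [eV]
sigma = 50.0 / ((E - 1.5) ** 2 + 0.01) + 2.0          # "cross section" shape
x = (E - E.mean()) / E.std()                          # preprocessing: standardize input
y = (sigma - sigma.mean()) / sigma.std()              # preprocessing: standardize target

# Hyperparameters of the kind the SPDNN study varies
n_hidden, lr, n_steps = 32, 0.05, 3000
W1 = rng.normal(0.0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

X, Y = x[:, None], y[:, None]
for step in range(n_steps):
    h = np.tanh(X @ W1 + b1)                          # activation function choice
    pred = h @ W2 + b2
    err = pred - Y
    loss = np.mean(err ** 2)                          # loss function choice (MSE)
    g_pred = 2.0 * err / len(X)                       # backpropagation
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ g_h, g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final MSE after {n_steps} steps with {n_hidden} hidden neurons: {loss:.4f}")
```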
Funding: Supported by the National Natural Science Foundation of China (No. 12347126).
Abstract: Prompt fission neutron spectra (PFNS) play a significant role in nuclear science and technology. In this study, the PFNS for ^(239)Pu are evaluated using both differential and integral experimental data. A method that leverages integral criticality benchmark experiments to constrain the PFNS data is introduced. The measured central values of the PFNS are perturbed by constructing a covariance matrix. The PFNS are sampled using two types of covariance matrices, either generated from an assumed correlation matrix combined with the experimental uncertainties or derived directly from experimental reports. The joint Monte Carlo transport code is employed to perform transport simulations of five criticality benchmark assemblies using the perturbed PFNS data. Extensive simulations yield an optimized PFNS that shows improved agreement with the integral criticality benchmark experiments. This study introduces a novel approach for optimizing differential experimental data through integral experiments, particularly when a covariance matrix is not provided.
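A hedged sketch of the perturbation step described above: build a covariance matrix from an assumed correlation matrix and relative uncertainties, draw correlated samples of the PFNS around the central values, and renormalize each sample to unit integral. The grid, uncertainties, and correlation length are illustrative placeholders, not the ^(239)Pu evaluation inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_pfns(energies, chi_central, rel_unc, corr_length_mev=1.0, n_samples=500):
    """Draw perturbed PFNS samples from N(chi_central, C), where
    C_ij = (rel_unc*chi_i) * R_ij * (rel_unc*chi_j) and R is an assumed
    Gaussian correlation matrix; each sample is renormalized to unit integral."""
    E = np.asarray(energies, float)
    chi = np.asarray(chi_central, float)
    std = rel_unc * chi
    R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / corr_length_mev) ** 2)
    C = np.outer(std, std) * R
    C += 1e-8 * C.diagonal().max() * np.eye(len(E))   # jitter for a stable Cholesky
    L = np.linalg.cholesky(C)
    samples = chi + rng.standard_normal((n_samples, len(E))) @ L.T
    samples = np.clip(samples, 0.0, None)              # keep spectra non-negative
    return samples / np.trapz(samples, E, axis=1)[:, None]

# Illustrative Maxwellian-like central spectrum on a coarse grid (not evaluated data)
E = np.linspace(0.1, 12.0, 60)                # outgoing neutron energy [MeV]
chi0 = np.sqrt(E) * np.exp(-E / 1.4)
chi0 /= np.trapz(chi0, E)
perturbed = sample_pfns(E, chi0, rel_unc=0.03)
print(perturbed.shape, np.trapz(perturbed[0], E))   # (500, 60), integral ~ 1.0
```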