Funding: Supported in part by the National Natural Science Foundation of China (62125306), the Zhejiang Key Research and Development Project (2024C01163), and the State Key Laboratory of Industrial Control Technology, China (ICT2024A06).
Abstract: In recent decades, control performance monitoring (CPM) has made remarkable progress in research and industrial applications. While CPM research has been investigated using various benchmarks, the historical data benchmark (HIS) has garnered the most attention due to its practicality and effectiveness. However, existing CPM reviews usually focus on theoretical benchmarks, and an in-depth review that thoroughly explores HIS-based methods is lacking. In this article, a comprehensive overview of HIS-based CPM is provided. First, we provide a novel static-dynamic perspective on the data-level manifestations of control performance underlying typical controller capacities (regulation and servo): the static and dynamic properties. The static property portrays time-independent variability in the system output, and the dynamic property describes temporal behavior driven by closed-loop feedback. Accordingly, existing HIS-based CPM approaches and their intrinsic motivations are classified and analyzed from these two perspectives. Specifically, two mainstream solutions for CPM are summarized, static analysis and dynamic analysis, which match data-driven techniques with actual control behavior. Furthermore, this paper points out the opportunities and challenges that CPM faces in modern industry and suggests promising directions in the context of artificial intelligence to inspire future research.
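The static and dynamic properties described above can be illustrated with simple data-driven indicators. The sketch below is a minimal, hypothetical illustration (not code from the paper): sample variance serves as a static indicator of output variability, and lag-1 autocorrelation as a dynamic indicator of temporal behavior; the function names and the two simulated loop outputs are assumptions made for this example.

```python
import random

def static_indicator(y):
    """Static property: time-independent variability (sample variance of the output)."""
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y) / len(y)

def dynamic_indicator(y, lag=1):
    """Dynamic property: lag-k autocorrelation, capturing temporal behavior."""
    m = sum(y) / len(y)
    yc = [v - m for v in y]
    denom = sum(v * v for v in yc)
    if denom == 0.0:
        return 0.0
    return sum(a * b for a, b in zip(yc[:-lag], yc[lag:])) / denom

# Two simulated loop outputs: a well-regulated loop (near-white output) and a
# sluggish loop (strongly autocorrelated output, modeled by heavy smoothing).
random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(2000)]
sluggish = [sum(white[max(0, i - 19):i + 1]) / (i + 1 - max(0, i - 19)) for i in range(2000)]
```

A well-tuned loop leaves little temporal structure in the output (autocorrelation near zero), while a sluggish loop shows strong autocorrelation even if its variance looks unremarkable, which is why both views are needed.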
Funding: Supported by the National Natural Science Foundation of China (No. 10675123).
Abstract: HENDL2.0, the latest version of the Hybrid Evaluated Nuclear Data Library, was developed based on evaluated data from FENDL2.1 and ENDF/B-VII. To qualify and validate the working library, an integral test of the neutron production data of HENDL2.0 was performed against a series of existing spherical-shell benchmark experiments (V, Be, Fe, Pb, Cr, Mn, Cu, Al, Si, Co, Zr, Nb, Mo, W and Ti). These experiments were simulated numerically using HENDL2.0/MG and the home-developed code VisualBUS. Calculations were also conducted with both FENDL2.1/MG and FENDL2.1/MC, the latter based on the continuous-energy Monte Carlo code MCNP/4C. By comparison and analysis of the neutron leakage spectra and the integral test, benchmark results for the neutron production data are presented in this paper.
Abstract: At present, big data is very popular because it has proved highly successful in many fields, such as social media and e-commerce transactions. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte-scale or larger datasets with diverse structures at high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open-source framework used to process large amounts of data inexpensively and efficiently, and job scheduling is a key factor for achieving high performance in big data processing. This paper gives an overview of big data and highlights its problems and challenges. It then describes the Hadoop Distributed File System (HDFS), Hadoop MapReduce, and the components that affect the performance of job scheduling in big data, such as the JobTracker, TaskTracker, NameNode, and DataNode. The primary purpose of this paper is to present a comparative study of job scheduling algorithms along with their experimental results in a Hadoop environment. In addition, this paper describes the advantages, disadvantages, features, and drawbacks of various Hadoop job schedulers, such as FIFO, Fair, Capacity, Deadline Constraints, Delay, LATE, and Resource Aware, and provides a comparative study of these schedulers.
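To make the scheduler comparison concrete, the toy simulation below contrasts FIFO and Fair scheduling on a single task slot. It is an illustrative sketch of the two policies' behavior, not the actual Hadoop implementation; the job names, arrival times, and task counts are all hypothetical.

```python
# Toy comparison of FIFO vs Fair scheduling on a single task slot.
# Jobs: (name, arrival_time, num_tasks); each task takes one time unit.
jobs = [("long", 0, 10), ("short", 1, 2)]

def simulate(policy):
    """Return {job_name: completion_time} under 'fifo' or 'fair' scheduling."""
    remaining = {name: tasks for name, _, tasks in jobs}
    served = {name: 0 for name, _, _ in jobs}
    done = {}
    t = 0
    while remaining:
        active = [name for name, arrival, _ in jobs if name in remaining and arrival <= t]
        if active:
            if policy == "fifo":
                pick = active[0]  # earliest-submitted job monopolizes the slot
            else:  # "fair": give the slot to the job that has received the least service
                pick = min(active, key=lambda n: served[n])
            remaining[pick] -= 1
            served[pick] += 1
            if remaining[pick] == 0:
                done[pick] = t + 1
                del remaining[pick]
        t += 1
    return done
```

Under FIFO the short job waits behind the long one and finishes at t = 12; under Fair sharing it finishes at t = 4 while the long job slips only from t = 10 to t = 12, which is exactly the latency trade-off that motivates fair schedulers for mixed workloads.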
Funding: Supported by the Natural Science Foundation of Anhui Province (No. 01043601).
Abstract: HENDL1.0/MG, a multi-group working library of the Hybrid Evaluated Nuclear Data Library, was developed in-house by the FDS Team of ASIPP (Institute of Plasma Physics, Chinese Academy of Sciences) on the basis of several national data libraries. To validate and qualify the process of producing HENDL1.0/MG, simulation calculations of a series of existing spherical-shell benchmark experiments (Al, Mo, Co, Ti, Mn, W, Be and V) were performed with HENDL1.0/MG and the multifunctional neutronics code system VisualBUS, also developed by the FDS Team.
Funding: Supported by the Key Laboratory of Degraded and Unused Land Consolidation Engineering, Ministry of Natural Resources of China (SXDJ2024-22); the Technology Innovation Centre for Integrated Applications in Remote Sensing and Navigation, Ministry of Natural Resources of China (TICIARSN-2023-06); the National Natural Science Foundation of China (42171446, 62302246); the Zhejiang Provincial Natural Science Foundation of China (LQ23F010008); and the Science and Technology Program of Tianjin, China (23ZGSSSS00010).
Abstract: Semantic segmentation of 3D point clouds in the railway environment holds significant economic value, but its development is severely hindered by the lack of suitable, domain-specific datasets. Additionally, models trained on existing urban road point cloud datasets generalise poorly to railway data due to a large domain gap caused by non-overlapping special/rare categories, for example, rail track and track bed. To harness the potential of supervised learning methods for 3D railway semantic segmentation, we introduce RailPC, a new point cloud benchmark. RailPC provides a large-scale dataset with rich annotations for semantic segmentation in the railway environment. Notably, RailPC contains twice the number of annotated points of the largest available mobile laser scanning (MLS) point cloud dataset and is the first railway-specific 3D dataset for semantic segmentation. It covers nearly 25 km of railway in two different scenes (urban and mountain), with 3 billion points finely labelled into the 16 most typical railway-related classes; data acquisition was completed in China using MLS systems. Through extensive experimentation, we evaluate the performance of advanced scene understanding methods on the annotated dataset and present a comprehensive analysis of the semantic segmentation results. Based on our findings, we identify several critical challenges for railway-scale point cloud semantic segmentation. The dataset is available at https://github.com/NNU-GISA/GISA-RailPC, and we will continuously update it based on community feedback.
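Semantic segmentation benchmarks of this kind are commonly scored by mean intersection-over-union (mIoU). The sketch below is a minimal, generic implementation of that standard metric, operating on flat label lists; it is not code from the RailPC benchmark, and the tiny example labels are hypothetical.

```python
def miou(pred, gt, num_classes):
    """Mean intersection-over-union over classes that appear in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Tiny example: class 0 IoU = 1/2, class 1 IoU = 2/3, so mIoU = 7/12
score = miou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```

Because mIoU averages per-class IoU, rare but safety-relevant railway classes (e.g. rail track) weigh as much as dominant ones, which is why it is preferred over plain point accuracy for imbalanced scenes.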
Abstract: The multi-group working nuclear data library HENDL1.0/MG is numerically tested against a series of existing spherical-shell benchmark experiments (Si, Cr, Fe, Cu, Zr and Nb) through calculations with the multi-functional neutronics code VisualBUS. The ratios of calculated to measured neutron leakage rates and the neutron leakage spectra are presented in tables and figures. Results from calculations with the code ANISN and the IAEA data library FENDL2.0/MG, whose underlying data originate from different sources than those of HENDL1.0/MG, are also included for comparison.
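The calculated/measured (C/E) ratio used in such leakage-rate comparisons is straightforward to compute. The sketch below uses purely hypothetical numbers, not values from the HENDL benchmark, to show the convention that a C/E ratio near 1.0 indicates good agreement between library and experiment.

```python
# Hypothetical leakage-rate values for two shells; NOT data from the HENDL benchmark.
calculated = {"Si": 0.412, "Fe": 0.388}   # calculated neutron leakage rates
measured = {"Si": 0.405, "Fe": 0.401}     # measured neutron leakage rates

# C/E ratio per shell material: values near 1.0 indicate good agreement.
ce_ratio = {k: calculated[k] / measured[k] for k in calculated}

# Signed percentage deviation from perfect agreement (C/E = 1.0).
deviation_pct = {k: 100.0 * (r - 1.0) for k, r in ce_ratio.items()}
```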
Abstract: This work illustrates the results obtained by applying the recently developed 2nd-order predictive modeling methodology called "2nd-BERRU-PM", where the acronym BERRU denotes "best-estimate results with reduced uncertainties" and "PM" denotes "predictive modeling". The physical system selected for this illustrative application is a polyethylene-reflected plutonium (PERP) OECD/NEA reactor physics benchmark. This benchmark is modeled using the neutron transport Boltzmann equation (involving 21,976 uncertain parameters), whose solution is representative of large-scale computations. The results obtained in this work confirm that the 2nd-BERRU-PM methodology predicts best-estimate results that fall between the corresponding computed and measured values, while reducing the predicted standard deviations to values smaller than either the experimentally measured or the computed standard deviations. The results also indicate that 2nd-order response sensitivities must always be included to quantify the need for including (or not) the 3rd- and/or 4th-order sensitivities. When the parameters are known with high precision, the contributions of the higher-order sensitivities diminish with increasing order, so that including the 1st- and 2nd-order sensitivities may suffice for obtaining accurate best-estimate response values and standard deviations.
On the other hand, when the parameters' standard deviations are large enough to approach (or exceed) the radius of convergence of the multivariate Taylor series that represents the response in the phase space of model parameters, the contributions stemming from the 3rd- and even 4th-order sensitivities are necessary to ensure consistency between the computed and measured response. In such cases, using only the 1st-order sensitivities erroneously indicates that the computed results are inconsistent with the respective measured response. Ongoing research aims at extending the 2nd-BERRU-PM methodology to fourth order, thus enabling the computation of third-order response correlations (skewness) and fourth-order response correlations (kurtosis).
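The role of the sensitivity orders discussed above can be seen from the multivariate Taylor series of a response R about the nominal parameter values (the notation below is assumed here for illustration): the 1st-, 2nd-, 3rd-, and 4th-order sensitivities are the successive partial derivatives, and the series is valid only while the parameter variations remain within its radius of convergence.

```latex
R(\boldsymbol{\alpha}) = R(\boldsymbol{\alpha}^{0})
  + \sum_{i}\frac{\partial R}{\partial\alpha_{i}}\,\delta\alpha_{i}
  + \frac{1}{2!}\sum_{i,j}\frac{\partial^{2} R}{\partial\alpha_{i}\,\partial\alpha_{j}}\,\delta\alpha_{i}\,\delta\alpha_{j}
  + \frac{1}{3!}\sum_{i,j,k}\frac{\partial^{3} R}{\partial\alpha_{i}\,\partial\alpha_{j}\,\partial\alpha_{k}}\,\delta\alpha_{i}\,\delta\alpha_{j}\,\delta\alpha_{k}
  + \cdots, \qquad \delta\alpha_{i} \equiv \alpha_{i}-\alpha_{i}^{0}.
```

When the standard deviations of the parameters are small, the higher-order terms are damped by the powers of the small variations; when they are large, the truncated series misrepresents the response, which is the mechanism behind the consistency failures described above.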
Abstract: Systems-on-a-chip with intellectual property cores require a large volume of test data, which in turn demands long testing times and large test-data memory. New techniques are therefore needed to reduce the test data volume, decrease the testing time, and overcome the ATE memory limitation for SOC designs. This paper presents a new test-data compression method for intellectual property core-based systems-on-chip. The proposed method is based on new split-data variable-length (SDV) codes, designed using split options along with identification bits in a string of test data. The paper analyses the reduction in test data volume, testing time, run time, and ATE memory requirements, as well as the improvement in compression ratio. Experimental results for the ISCAS'85 and ISCAS'89 benchmark circuits show that SDV codes outperform other compression methods, achieving the best compression ratio for test data compression. A decompression architecture for SDV codes is also presented for decoding the compressed bit streams. The proposed scheme shows that SDV codes adapt to variations in the input test data stream.
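While the SDV coding details are specific to the paper, the figure of merit it reports, the compression ratio, is standard in test-data compression. The sketch below illustrates it with a toy run-length code applied to a hypothetical test stream; this is explicitly not the SDV scheme, and the 5-bit run cost is a modeling assumption made for the example.

```python
def run_length_encode(bits):
    """Toy variable-length run coding of a test stream; illustrative only, NOT the SDV scheme."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))  # (run symbol, run length)
        i = j
    return runs

def compressed_bits(runs):
    """Assume each run costs 1 symbol bit plus a 4-bit length field (a modeling assumption)."""
    return len(runs) * 5

def compression_ratio(original_bit_count, compressed_bit_count):
    """Percentage reduction relative to the original test-data volume."""
    return 100.0 * (original_bit_count - compressed_bit_count) / original_bit_count

stream = "0" * 8 + "1" * 4 + "0" * 10 + "1" * 3   # hypothetical 25-bit test stream
runs = run_length_encode(stream)
```

Here the 25-bit stream collapses into 4 runs costing 5 bits each, i.e. 20 bits, a compression ratio of 20%; real test sets, with long runs of don't-care-filled identical bits, compress far more aggressively.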