Funding: This project is supported by the National Natural Science Foundation of China (No. 50475117) and the Tianjin Municipal Science and Technology Commission, China (No. 0431835116).
Abstract: The data processing technique and the method for determining the optimal number of measured points are studied for the sphericity error measured on a coordinate measuring machine (CMM). The criterion for the minimum zone of a spherical surface is analyzed first, and an approximation technique that searches for the minimum sphericity error in the form data is then studied. To obtain the minimum zone of the spherical surface, the radial separation is reduced gradually by moving the center of the concentric spheres along certain directions with certain steps, so the algorithm is both precise and efficient. After an appropriate mathematical model for the approximation technique is created, a data processing program is developed accordingly. By processing the measured data with this program, the sphericity errors are evaluated when different numbers of measured points are taken from the same sample, and the corresponding scatter diagram and fitted curve for the sample are plotted. The optimal number of measured points is then determined through regression analysis. Experiments show that both the data processing technique and the method for determining the optimal number of measured points are effective. On average, the obtained sphericity error is 5.78 μm smaller than the least-squares solution, an accuracy improvement of 8.63%, and the obtained optimal number of measured points is half the number usually measured.
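The center-shifting search described in the abstract can be sketched as a simple pattern search: starting from a trial center, the radial separation (the width of the minimal concentric-sphere zone) is re-evaluated after small moves along the coordinate axes, and the step is halved whenever no move helps. This is a minimal illustration of the general idea under stated assumptions, not the authors' exact algorithm; the function names and the centroid seed are assumptions.

```python
import numpy as np

def radial_separation(center, points):
    """Width of the concentric-sphere zone for a given center:
    largest radial distance minus smallest radial distance."""
    r = np.linalg.norm(points - center, axis=1)
    return r.max() - r.min()

def min_zone_sphericity(points, step=1.0, tol=1e-9):
    """Pattern search: try moving the trial center along +/-x, +/-y, +/-z;
    keep any move that shrinks the radial separation, otherwise halve the step."""
    center = points.mean(axis=0)           # seed at the centroid (least-squares-like start)
    best = radial_separation(center, points)
    directions = np.vstack([np.eye(3), -np.eye(3)])
    while step > tol:
        improved = False
        for d in directions:
            candidate = center + step * d
            value = radial_separation(candidate, points)
            if value < best:
                center, best, improved = candidate, value, True
        if not improved:
            step *= 0.5                    # refine the search when no direction helps
    return center, best
```

For points lying exactly on a sphere, the search drives the separation toward zero; for real CMM data it returns the minimum-zone sphericity estimate directly.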
Funding: National Natural Science Foundation of China, No. 41601151; Guangdong Natural Science Foundation, No. 2016A030310149.
Abstract: Data show that carbon emissions are increasing due to the human energy consumption that accompanies economic development. As a result, a great deal of attention has been focused on efforts to reduce this growth in carbon emissions and to formulate policies to address and mitigate climate change. Although the majority of previous studies have explored the driving forces underlying Chinese carbon emissions, few have been carried out at the city level because of the limited availability of relevant energy consumption statistics. Here, we utilize spatial autocorrelation, Markov-chain transition matrices, a dynamic panel model, and system generalized method of moments (Sys-GMM) estimation to empirically evaluate the key determinants of city-level carbon emissions based on Chinese remote sensing data collected between 1992 and 2013. We also use these data to discuss observed spatial spillover effects, taking into account spatiotemporal lags and a range of different geographical and economic weighting matrices. The results of this study suggest that regional discrepancies in city-level carbon emissions have decreased over time, a trend consistent with a marked spatial spillover effect and a 'club' agglomeration of high-emission cities. The evolution of these patterns also shows obvious path dependence, while the panel data analysis reveals a significant U-shaped relationship between carbon emissions and per capita GDP. The data also show that per capita carbon emissions have increased in concert with economic growth in most cities, and that a high proportion of secondary industry and extensive investment growth have exerted significant positive effects on city-level carbon emissions across China. In contrast, rapid population agglomeration, improvements in technology, increasing trade openness, and the accessibility and density of roads have all played a role in inhibiting carbon emissions.
Thus, to reduce emissions, the Chinese government should legislate to inhibit the factors that promote the release of carbon while encouraging those that mitigate it. On the basis of the analysis presented in this study, we argue that optimizing industrial structures, streamlining extensive investment, raising the level of technology, and improving road accessibility are all effective approaches to saving energy and reducing carbon emissions across China.
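As a concrete illustration of the spatial-autocorrelation step, global Moran's I measures whether high-emission cities tend to neighbour other high-emission cities. The sketch below uses inverse-distance, row-standardised weights on toy data; this particular weighting scheme is an assumption for illustration, not necessarily one of the matrices used in the study.

```python
import numpy as np

def morans_i(values, coords):
    """Global Moran's I with inverse-distance, row-standardised weights.
    Positive values indicate spatial clustering of similar levels."""
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = len(values)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.zeros_like(dist)
    off = dist > 0                          # zero weight on the diagonal
    w[off] = 1.0 / dist[off]                # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)       # row-standardise: each row sums to 1
    z = values - values.mean()              # deviations from the mean
    return (n / w.sum()) * (z @ w @ z) / (z @ z)
```

A smooth west-to-east gradient of values yields a positive index (clustering), while a checkerboard pattern of alternating values scores lower (dispersion).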
Funding: This work was funded by the Security BigData Fusion Project (Office of the Ministry of Higher Education, Science, Research and Innovation). The corresponding author is the project PI.
Abstract: Attempts to determine the characteristics of astronomical objects have long been a major and vibrant activity in both astronomy and data science. Instead of manual inspection, various automated systems have been developed to meet this need, including the classification of light-curve profiles. A Kaggle competition, the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC), was launched to gather new ideas for tackling this task using the data set collected from the Large Synoptic Survey Telescope (LSST) project. Almost all proposed methods fall into the supervised family, with the common aim of categorizing each object into one of several pre-defined types. Because the challenge focuses on developing a predictive model that is robust when classifying unseen data, these previous attempts similarly suffer from a lack of discriminative features, since the distributions of the training and actual test data sets differ substantially. As a result, well-known classification algorithms prove sub-optimal, while more complicated feature extraction techniques may only slightly boost predictive performance. Given this burden, the present research explores an unsupervised alternative to this difficult task, on which common classifiers fail to reach the 50% accuracy mark. A clustering technique is exploited to transform the space of training data, from which a more accurate classifier can be built. In addition to a single-clustering framework that provides accuracy comparable to the front runners of supervised learning, a multiple-clustering alternative is also introduced with improved performance: it raises the accuracy to 58.32% from the 51.36% obtained with a simple clustering. For this difficult problem, these results compare favorably with those achieved by well-known models such as the support vector machine (SVM), at 51.80%, and naive Bayes (NB), at only 2.92%.
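The single-clustering idea can be sketched as follows: partition the training data with k-means, label each cluster by a majority vote over its members, and classify a new object by the label of its nearest cluster centre. This is a generic sketch of cluster-based classification under stated assumptions, not the competition code; all function names and the farthest-first initialisation are assumptions.

```python
import numpy as np

def farthest_first_init(X, k, seed=0):
    """Pick k well-separated starting centres (greedy farthest-point heuristic)."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d = np.min(((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1), axis=1)
        idx.append(int(d.argmax()))        # farthest point from the chosen set
    return X[idx].astype(float)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-centre assignment and centre update."""
    centers = farthest_first_init(X, k, seed)
    for _ in range(iters):
        assign = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers, assign

def fit_cluster_classifier(X, y, k, seed=0):
    """Label each cluster with the majority class of its training members."""
    centers, assign = kmeans(X, k, seed=seed)
    cluster_label = np.array([
        np.bincount(y[assign == j]).argmax() if np.any(assign == j) else -1
        for j in range(k)
    ])
    return centers, cluster_label

def predict(X, centers, cluster_label):
    """Classify each sample by the label of its nearest cluster centre."""
    nearest = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return cluster_label[nearest]
```

The multiple-clustering variant mentioned in the abstract would repeat this with several clusterings (e.g. different k or seeds) and combine the resulting labels, which is where the reported gain from 51.36% to 58.32% arises.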