A profound understanding of the costs of performing condition assessment on buried drinking water pipeline infrastructure is required for enhanced asset management. Toward this end, an automated and uniform method of collecting cost data can provide water utilities with a means of viewing, understanding, interpreting and visualizing complex geographically referenced cost information to reveal data relationships, patterns and trends. However, there has been no standard data model that allows automated data collection and interoperability across platforms. The primary objective of this research is to develop a standard cost data model for drinking water pipeline condition assessment projects and to conflate disparate datasets from different utilities. The capabilities of this model will be further demonstrated through trend analyses. Field mapping files will be generated from the standard data model and demonstrated in an interactive web map, created using the Google Maps API (application programming interface) for JavaScript, that allows the user to toggle project examples and perform regional comparisons. The aggregation of standardized data and its further use in mapping applications will help provide timely access to condition assessment cost information and resources, leading to enhanced asset management and resource allocation for drinking water utilities.
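As a rough illustration of what a standardized, geographically referenced cost record might look like, the sketch below defines a record type with a GeoJSON export, since GeoJSON is a common interchange format that web-mapping APIs such as Google Maps can ingest. The field names are hypothetical and are not taken from the paper's actual data model.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CostRecord:
    """One condition-assessment cost entry (illustrative fields only)."""
    utility: str
    project_id: str
    latitude: float
    longitude: float
    pipe_material: str
    assessed_length_m: float
    total_cost_usd: float

    def to_geojson_feature(self) -> dict:
        """Convert the record to a GeoJSON Feature; note GeoJSON
        orders coordinates as [longitude, latitude]."""
        return {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [self.longitude, self.latitude]},
            "properties": {k: v for k, v in asdict(self).items()
                           if k not in ("latitude", "longitude")},
        }

record = CostRecord("Utility A", "P-001", 38.03, -78.51,
                    "cast iron", 1200.0, 54000.0)
print(json.dumps(record.to_geojson_feature(), indent=2))
```

A collection of such features, wrapped in a FeatureCollection, could be loaded directly into a web map layer for the toggling and regional comparison described above.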
In view of the difficulty of predicting the cost data of power transmission and transformation projects, a method based on the Pearson correlation coefficient, improved particle swarm optimization (IPSO) and the extreme learning machine (ELM) is proposed. The Pearson correlation coefficient is used to screen out the main influencing factors as the input variables of the ELM algorithm, and IPSO based on a ladder-structure coding method is used to optimize the number of hidden-layer nodes, input weights and bias values of the ELM. On this basis, a prediction model for the cost data of power transmission and transformation projects, based on the Pearson correlation coefficient-IPSO-ELM algorithm, is constructed. Analysis of worked examples shows that the prediction accuracy of the proposed method is higher than that of other algorithms, verifying the effectiveness of the model.
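A minimal sketch of the two building blocks, Pearson screening of candidate cost factors and a basic ELM, is shown below on synthetic data. The paper additionally tunes the hidden-layer size, input weights and biases with ladder-coded IPSO; in this simplified sketch they are simply drawn at random, and the threshold and data are invented for illustration.

```python
import numpy as np

def pearson_screen(X, y, threshold=0.3):
    """Keep only features whose |Pearson r| with the target exceeds threshold."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.abs(r) > threshold
    return X[:, keep], keep

def train_elm(X, y, n_hidden=20, seed=0):
    """Basic ELM: random input weights and biases, hidden activations,
    then a least-squares (pseudoinverse) solution for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y    # Moore-Penrose least-squares fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy problem: cost depends on two of four candidate influencing factors.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=200)

Xs, keep = pearson_screen(X, y)           # screening drops the two dead factors
W, b, beta = train_elm(Xs, y)
mse = np.mean((elm_predict(Xs, W, b, beta) - y) ** 2)
```

An IPSO layer would wrap `train_elm`, treating `n_hidden`, `W` and `b` as particle positions and validation error as the fitness.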
The virtual data center is a new form of the cloud computing concept applied to data centers. As one of its most important challenges, the virtual data center embedding problem has attracted much attention from researchers. Energy is a critical issue in data centers, given that their energy consumption has increased dozens of times over the last decade. In this paper, we are concerned with the cost-aware multi-domain virtual data center embedding problem. To solve it, we first formulate an energy consumption model, covering both virtual machine nodes and virtual switch nodes, to quantify the energy consumed in the virtual data center embedding process. Based on this model, we present a heuristic algorithm for cost-aware multi-domain virtual data center embedding. The algorithm consists of two steps: inter-domain embedding, which divides a virtual data center request into several slices and selects an appropriate single data center for each, and intra-domain embedding, which places each slice within its data center. We first propose an inter-domain embedding algorithm based on label propagation to select the appropriate data center, and then a cost-aware algorithm to perform the intra-domain embedding. Extensive simulation results show that the proposed algorithm effectively reduces energy consumption while maintaining the embedding success ratio.
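To make the inter-domain step concrete, the toy heuristic below greedily assigns each request slice to the feasible data center with the lowest marginal energy cost, rejecting the request when no data center can host a slice. This is a simplified stand-in, not the paper's label-propagation algorithm, and the capacity and cost figures are invented.

```python
def embed_slices(slices, datacenters):
    """Greedy inter-domain placement: largest slices first, each to the
    feasible data center with the lowest marginal energy cost.
    Returns a slice->datacenter mapping, or None if embedding fails."""
    placement = {}
    for slice_id, demand in sorted(slices.items(),
                                   key=lambda kv: kv[1], reverse=True):
        candidates = [(dc["energy_cost"] * demand, name)
                      for name, dc in datacenters.items()
                      if dc["capacity"] >= demand]
        if not candidates:
            return None                      # request rejected
        cost, best = min(candidates)
        datacenters[best]["capacity"] -= demand
        placement[slice_id] = best
    return placement

dcs = {"dc1": {"capacity": 10, "energy_cost": 1.0},
       "dc2": {"capacity": 6,  "energy_cost": 0.5}}
slices = {"s1": 5, "s2": 4, "s3": 3}
placement = embed_slices(slices, dcs)
print(placement)
```

Placing the largest slices first reduces the chance that a big slice is stranded with no remaining capacity, which is one way a heuristic can trade energy cost against embedding success ratio.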
Enterprises hold vast amounts of customer behavior data in the era of big data. How to use these data to evaluate customer churn risk effectively is a common issue enterprises face. Most traditional customer churn prediction models ignore customer segmentation and misclassification cost, which reduces their validity. To address these deficiencies, we establish a customer churn model based on customer segmentation and misclassification cost, and use it to analyze the customer behavior data of a telecom company. The results show that this model outperforms models without customer segmentation and misclassification cost in terms of performance, accuracy and coverage.
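The effect of misclassification cost can be sketched with a cost-sensitive decision rule: flag a customer when the expected cost of missing a churner (false negative) exceeds the expected cost of a needless retention offer (false positive). The cost figures and segments below are made up for illustration and are not taken from the paper.

```python
def churn_decision(p_churn, cost_fn, cost_fp):
    """Flag as churn-risk when the expected false-negative cost
    exceeds the expected false-positive cost."""
    return p_churn * cost_fn > (1 - p_churn) * cost_fp

def threshold(cost_fn, cost_fp):
    """Probability threshold implied by the cost ratio."""
    return cost_fp / (cost_fn + cost_fp)

# If losing a customer costs 5x a wasted retention offer, the break-even
# threshold drops well below the cost-blind default of 0.5.
t = threshold(cost_fn=5.0, cost_fp=1.0)

# Segmentation: different segments carry different false-negative costs,
# so the same churn probability can yield different decisions.
segment_costs = {"high_value": (10.0, 1.0), "standard": (5.0, 1.0)}
flags = {seg: churn_decision(0.12, fn, fp)
         for seg, (fn, fp) in segment_costs.items()}
```

Here a 12% churn probability triggers intervention for the high-value segment but not the standard one, which is the kind of behavior a segmentation-and-cost-aware model captures and a plain accuracy-driven classifier does not.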
Funding (virtual data center embedding paper): supported in part by the National Natural Science Foundation of China under Grants 61602050 and U1534201, and the National Key Research and Development Program of China under Grant 2016QY01W0200.