Funding: supported by the National Natural Science Foundation of China (62173337, 21808181, and 72071207); the National Natural Science Foundation of China (71790615, 72025405, 91846301, and 72088101); the Hunan Science and Technology Plan Project (2020TP1013 and 2020JJ4673); the Shenzhen Basic Research Project for Development of Science and Technology (JCYJ20200109141218676 and 202008291726500001); and the Innovation Team Project of Colleges in Guangdong Province (2020KCXTD040).
Abstract: Strategy evaluation and optimization in response to troubling urban issues have become challenging due to increasing social uncertainty, unreliable predictions, and poor decision-making. To address this problem, we propose a universal computational experiment framework built on a fine-grained artificial society integrated with data-based models. The framework evaluates the consequences of various combinations of strategies, with the goal of reaching a Pareto optimum that trades off efficacy against cost. As an example, by modeling the mitigation of coronavirus disease 2019, we show through the analysis of real-world data that nations on the Pareto frontier achieved better economic growth and more effective epidemic control. Our work suggests that a nation's intervention strategy could be optimized, through large-scale computational experiments, based on the measures adopted by Pareto-frontier nations. Our solution has been validated for epidemic control and can be generalized to other urban issues as well.
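To make the Pareto-frontier selection the abstract describes concrete, here is a minimal sketch in Python; the dominance rule (lower cost, higher efficacy) follows the abstract, but the strategy names and the cost and efficacy values are hypothetical placeholders, not data from the study:

```python
# Minimal sketch of Pareto-frontier selection over strategy outcomes.
# A strategy is Pareto-optimal if no other strategy is at least as
# cheap and at least as effective, and strictly better on one axis.

def pareto_frontier(strategies):
    """Return the strategies not dominated by any other strategy."""
    frontier = []
    for name, cost, efficacy in strategies:
        dominated = any(
            o_cost <= cost and o_eff >= efficacy
            and (o_cost < cost or o_eff > efficacy)
            for _, o_cost, o_eff in strategies
        )
        if not dominated:
            frontier.append((name, cost, efficacy))
    return frontier

# Hypothetical (strategy, cost, efficacy) tuples for illustration only.
candidates = [
    ("strict lockdown",  9.0, 0.95),
    ("targeted tracing", 4.0, 0.85),
    ("masks only",       1.0, 0.40),
    ("late lockdown",    8.0, 0.60),  # dominated by targeted tracing
]

print(pareto_frontier(candidates))
```

In the framework's setting, each candidate would instead be a combination of interventions scored by the artificial-society simulation, and the frontier would identify the combinations worth adopting.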
Funding: partially funded by the European Union's Horizon 2020 research and innovation programme through the CLARIFY project under Marie Sklodowska-Curie grant agreement No. 860627; the ARTICONF project, grant agreement No. 825134; the ENVRI-FAIR project, grant agreement No. 824068; the BLUECLOUD project, grant agreement No. 862409; and by LifeWatch ERIC.
Abstract: Literate computing environments, such as Jupyter (i.e., Jupyter Notebooks, JupyterLab, and JupyterHub), have been widely used in scientific studies; they allow users to interactively develop scientific code, test algorithms, and describe the scientific narratives of their experiments in an integrated document. To scale up scientific analyses, many Jupyter environment architectures encapsulate whole notebooks as reproducible units and autoscale them on dedicated remote infrastructures (e.g., high-performance computing and cloud computing environments). Existing solutions remain limited in several ways: 1) the workflow (or pipeline) is implicit in a notebook, and although some steps could be reused by different code and executed in parallel, the tight cell structure forces all steps in a notebook to execute sequentially and offers little flexibility for reusing core code fragments; and 2) performance bottlenecks limit parallelism and scalability when handling extensive input data and complex computation. In this work, we focus on managing the workflow in a notebook seamlessly. We 1) encapsulate reusable cells as RESTful services and containerize them as portable components, 2) provide a composition tool for describing the workflow logic of those reusable components, and 3) automate their execution on remote cloud infrastructure. Empirically, we validate the solution's usability via a use case from the Ecology and Earth Science domain involving the processing of massive Light Detection and Ranging (LiDAR) data. The demonstration and analysis show that our method is feasible but needs further improvement, especially in integrating distributed workflow scheduling, automatic deployment, and execution, before it develops into a mature approach.
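To illustrate the cell-as-a-service idea, here is a minimal sketch of how a reusable notebook cell could be wrapped as a RESTful component; the use of Flask, the `/run` endpoint, the payload shape, and the `classify_points` stand-in logic are all illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch: expose a reusable notebook cell as a RESTful service
# so a workflow composer can invoke it as a standalone component.
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_points(points, threshold):
    """Stand-in for the cell's core logic, e.g., a LiDAR filtering step."""
    return [p for p in points if p.get("elevation", 0.0) >= threshold]

@app.route("/run", methods=["POST"])
def run_cell():
    # The cell's inputs arrive as JSON and its outputs return as JSON,
    # so the component can be chained with others in a workflow.
    payload = request.get_json(force=True)
    result = classify_points(payload["points"], payload.get("threshold", 0.0))
    return jsonify({"points": result})

if __name__ == "__main__":
    # In the containerized setting this server would be the image's
    # entry point; port 8080 is an arbitrary choice.
    app.run(host="0.0.0.0", port=8080)
```

A workflow engine could then POST `{"points": [...], "threshold": 2.5}` to each component's endpoint, running independent components in parallel rather than sequentially through notebook cells.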
Funding: funded by Project No. 267967 (Energix) of the NFR (Norwegian Research Council) and by grant agreement No. 825134 (ARTICONF) of the European Union's Horizon 2020 programme.
Abstract: Peak mitigation is of interest to power companies because peak periods may require the operator to over-provision supply to meet peak demand. Flattening the usage curve can result in cost savings, both for power companies and for end users. Integrating renewable energy into the energy infrastructure presents an opportunity to use excess renewable generation to supplement supply and alleviate peaks. In addition, demand-side management can shift usage from peak to off-peak times and reduce the magnitude of peaks. In this work, we present a data-driven approach to incentive-based peak mitigation. Understanding user energy profiles is an essential step in this process. We begin by analysing a popular energy research dataset published by the Ausgrid corporation. Extracting aggregated user energy behavior in temporal contexts, together with semantic linking and contextual clustering, gives us insight into consumption and rooftop solar generation patterns. We then implement and performance-test a blockchain-based prosumer incentivization system whose smart contract logic is based on our analysis of the Ausgrid dataset. Our implementation is capable of supporting 792,540 customers with a reasonably low infrastructure footprint.
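As a rough illustration of incentive-based peak mitigation, here is a sketch of the kind of settlement logic a smart contract might encode, written in Python for readability rather than in a contract language; the peak window, credit rates, function names, and meter readings are hypothetical placeholders and are not taken from the Ausgrid analysis:

```python
# Sketch of incentive settlement for peak mitigation: a customer earns
# credits for consuming less than a historical baseline during the peak
# window and for exporting rooftop-solar energy at peak time.
# All constants below are hypothetical placeholders.
PEAK_HOURS = set(range(17, 21))  # assumed evening peak, 17:00-21:00
REDUCTION_CREDIT = 0.10          # credit per kWh below baseline at peak
EXPORT_CREDIT = 0.08             # credit per kWh exported at peak

def settle_day(readings, baseline_peak_kwh):
    """Daily incentive from hourly (hour, consumed_kwh, exported_kwh) readings."""
    peak_consumed = sum(c for h, c, _ in readings if h in PEAK_HOURS)
    peak_exported = sum(e for h, _, e in readings if h in PEAK_HOURS)
    reduction = max(0.0, baseline_peak_kwh - peak_consumed)
    return round(reduction * REDUCTION_CREDIT + peak_exported * EXPORT_CREDIT, 4)

# Hypothetical day: the customer shifted load off-peak and exported
# 0.8 kWh of solar during the peak window.
day = [(7, 2.0, 0.0), (13, 1.5, 1.2), (18, 0.5, 0.8), (22, 3.0, 0.0)]
print(settle_day(day, baseline_peak_kwh=2.0))  # 0.214
```

In the blockchain setting, the equivalent logic would run inside the smart contract against meter readings submitted by prosumers, with credits recorded on-chain.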