Funding: This work is supported by the National Key Research and Development Plan program of the Ministry of Science and Technology of China (No. 2016YFB0201100). Additionally, this work is supported by the National Laboratory for Marine Science and Technology (Qingdao) Major Project of the Aoshan Science and Technology Innovation Program (No. 2018ASKJ01-04) and the Open Foundation of the Key Laboratory of Marine Science and Numerical Simulation, Ministry of Natural Resources (No. 2021-YB-02).
Abstract: In this paper, a typical experiment is carried out based on a high-resolution air-sea coupled model, namely, the coupled ocean-atmosphere-wave-sediment transport (COAWST) model, on both heterogeneous many-core (SW) and homogeneous multicore (Intel) supercomputing platforms. We construct a hindcast of Typhoon Lekima on both the SW and Intel platforms, compare the simulation results between these two platforms, and compare the key elements of the atmospheric and ocean modules to reanalysis data. The comparative experiment in this typhoon case indicates that the domestic many-core computing platform and the general cluster yield almost no differences in the simulated typhoon path and intensity, and the differences in surface pressure (PSFC) in the WRF model and sea surface temperature (SST) in the short-range forecast are very small, whereas a major difference can be identified at high latitudes after the first 10 days. Further heat budget analysis verifies that the differences in SST after 10 days are mainly caused by shortwave radiation variations, as influenced by subsequently generated typhoons in the system. The typhoons generated in the hindcast after the first 10 days follow markedly different trajectories on the two platforms.
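As background for the heat budget analysis mentioned above, a conventional mixed-layer heat budget (a textbook decomposition, not an equation quoted from the paper) writes the SST tendency as

\frac{\partial T}{\partial t} = \frac{Q_{sw} + Q_{lw} + Q_{sh} + Q_{lh}}{\rho_0 c_p h} - \mathbf{u}\cdot\nabla T - \frac{w_e\,(T - T_d)}{h},

where the right-hand terms are, respectively, the net surface heat flux (shortwave, longwave, sensible, latent) absorbed over a mixed layer of depth h, horizontal advection, and entrainment of sub-layer water of temperature T_d at the mixed-layer base. Under such a decomposition, a persistent shortwave anomaly, for example from differently simulated typhoon cloud cover, translates directly into an SST difference.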
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62225205, 92055213, 62302160), the Natural Science Foundation of Hunan Province (Grant No. 2024JJ6154), the Science and Technology Program of Changsha (kh2301011), and the Shenzhen Basic Research Project (Natural Science Foundation) (JCYJ20210324140002006).
Abstract: This paper proposes an enhanced MapReduce framework for the geo-distributed supercomputing Internet that minimizes the need for data transmission across data centers. Leveraging hierarchical scheduling techniques, the framework optimizes data locality to mitigate network latency and bandwidth consumption during reduce operations, thereby reducing overall job execution times. The paper introduces a mathematical model for task scheduling within the supercomputing Internet and formally describes the data transmission process among data centers. In the job scheduling phase, the framework facilitates efficient overlap of transfer and computation through pre-selected data centers. In the data transmission phase, the framework aggregates data to reduce the frequency of transmission, thus alleviating the adverse effects of the hierarchical network architecture on transmission. Comparative analysis with existing methods demonstrates the efficacy of the proposed framework on similar computational challenges, and empirical evaluations underscore its effectiveness in practice.
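To illustrate the kind of locality-aware placement such a framework targets, the sketch below picks the reduce-side data center that minimizes cross-center transfer time. This is a minimal illustration under our own assumptions; the cost model and all names are hypothetical, not the paper's algorithm.

# Hypothetical sketch: choose the reduce data center that minimizes
# cross-data-center transfer time, given per-center map output sizes
# and pairwise inter-center bandwidths.
def pick_reduce_center(map_output_bytes, bandwidth_bps):
    """map_output_bytes: {center: bytes}; bandwidth_bps: {(src, dst): bits/s}."""
    def transfer_time(dst):
        return sum(
            8 * size / bandwidth_bps[(src, dst)]
            for src, size in map_output_bytes.items()
            if src != dst
        )
    return min(map_output_bytes, key=transfer_time)

outputs = {"dc_a": 4e9, "dc_b": 1e9, "dc_c": 0.5e9}       # hypothetical sizes
links = {(s, d): 10e9 for s in outputs for d in outputs}  # uniform 10 Gb/s
print(pick_reduce_center(outputs, links))                 # -> "dc_a"

With uniform link bandwidth the largest data holder wins, which is exactly the data-locality intuition; non-uniform bandwidths shift the choice toward well-connected centers.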
Abstract: Supercomputing technology has supported the solution of cutting-edge scientific and complex engineering problems since its inception, serving as a comprehensive representation of the most advanced computer hardware and software technologies of its time. Over the course of nearly 80 years of development, supercomputing has progressed from being oriented towards computationally intensive tasks to being oriented towards a hybrid of computationally and data-intensive tasks. Driven by the continuous development of high performance data analytics (HPDA) applications, such as big data, deep learning, and other intelligent tasks, supercomputing storage systems face challenges such as a sudden increase in data volume for computational processing tasks, increased and diversified computing power of supercomputing systems, and higher reliability and availability requirements. Against this background, data-intensive supercomputing, deeply integrated with data centers and smart computing centers, aims to solve the problems of complex data type optimization, mixed-load optimization, multi-protocol support, and interoperability in the storage system, thereby becoming the main protagonist of research and development today and for some time to come. This paper first introduces key concepts in HPDA and data-intensive computing, and then illustrates the extent to which existing platforms support data-intensive applications by analyzing the most representative supercomputing platforms today (Fugaku, Summit, Sunway TaihuLight, and Tianhe 2A). This is followed by an illustration of the actual demand for data-intensive applications in today's mainstream scientific and industrial communities, from the perspectives of both scientific and commercial applications. Next, we provide an outlook on future trends and the potential challenges data-intensive supercomputing faces. In short, this paper provides researchers and practitioners with a quick overview of the key concepts and developments in supercomputing, and captures the current and future data-intensive supercomputing research hotspots and key issues that need to be addressed.
Funding: This research was sponsored by the Advanced Scientific Computing Research Program, the Office of Science, U.S. Department of Energy, through grants DE-SC0014917, DE-SC0012610, and DE-AC02-06CH11357.
Abstract: The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, studying this temporal network behavior requires a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool: a visual analytics system for investigating the temporal behavior of a Dragonfly network and optimizing the communication performance of a supercomputer. We couple interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchy, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers.
Abstract: This issue focuses on the topic of innovations in supercomputing techniques. Six invited papers were selected through a peer-review procedure, covering the research progress of China's supercomputing, interconnection networks, performance evaluation, and parallel algorithms. Prof. Yutong Lu summarizes the recent progress of supercomputing systems in China by introducing the three pre-Exascale supercomputers.
Funding: This work was financially supported by the Beijing Natural Science Foundation under grant No. JQ21034, the Major Research Program of Henan Province under grant No. 201400211300, the National Natural Science Foundation of China (NSFC) under grant Nos. 21776280, 22073103 and 91934302, and the Strategic Priority Research Program of the Chinese Academy of Sciences under grant No. XDC01040100.
Abstract: Large-scale atomistic simulation of low-dimensional silicon nanostructures has been implemented on a heterogeneous supercomputer equipped with a large number of GPU-like accelerators (GLAs). In the simulation, an innovative parallel algorithm was developed that combines the dynamic neighbor list and static neighbor list algorithms, targeting the different regions of the nanostructures. Furthermore, several optimization techniques were applied to the computationally intensive many-body force evaluation between atoms, such as SIMD vectorization, manual loop unrolling, pre-calculation of memory addresses, and reordering of data structures. The simulation achieved excellent weak and strong scalability in the parallel implementation, with up to 805.3 billion silicon atoms simulated. This development suggests an exciting future for predicting the thermodynamic properties of low-dimensional nanostructures.
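Both neighbor-list variants named above build on the same underlying idea: bin atoms into cells no smaller than the cutoff so that neighbor candidates come only from adjacent cells. The following is a minimal NumPy illustration of that cell-list search; it is our own sketch with invented names, not the authors' accelerator code.

import numpy as np

# Minimal cell-list neighbor search under periodic boundaries (illustrative only).
# Binning atoms into cells of edge >= cutoff makes the search O(N), not O(N^2).
def build_neighbor_list(positions, box, cutoff):
    cells_per_dim = np.maximum((box / cutoff).astype(int), 1)
    cell_size = box / cells_per_dim
    cell_of = (positions // cell_size).astype(int) % cells_per_dim
    buckets = {}
    for i, c in enumerate(map(tuple, cell_of)):
        buckets.setdefault(c, []).append(i)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
               for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    neighbors = [[] for _ in range(len(positions))]
    for i, c in enumerate(cell_of):
        for nc in {tuple((c + off) % cells_per_dim) for off in offsets}:
            for j in buckets.get(nc, []):
                if j == i:
                    continue
                d = positions[j] - positions[i]
                d -= box * np.round(d / box)  # minimum-image convention
                if d @ d < cutoff * cutoff:
                    neighbors[i].append(j)
    return neighbors

nbrs = build_neighbor_list(np.random.rand(1000, 3) * 10.0,
                           np.array([10.0, 10.0, 10.0]), 2.5)

A dynamic variant rebuilds such a list every few steps for mobile regions, while a static variant reuses it where the atomic arrangement stays fixed, matching the division of labor described in the abstract.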
Funding: Project supported by the Key R&D Program of Zhejiang Province, China (No. 2022C01250) and the National Key R&D Program of China (No. 2019YFA0709402).
Abstract: With the continuous improvement of supercomputer performance and the integration of artificial intelligence with traditional scientific computing, the scale of applications is gradually increasing, from millions to tens of millions of computing cores, which poses great challenges for achieving high scalability and efficiency of parallel applications on super-large-scale systems. Taking the Sunway exascale prototype system as an example, in this paper we first analyze the challenges of high scalability and high efficiency for parallel applications in the exascale era. To overcome these challenges, the optimization technologies used in the parallel supporting environment software on the Sunway exascale prototype system are highlighted, including the parallel operating system, input/output (I/O) optimization technology, ultra-large-scale parallel debugging technology, the 10-million-core parallel algorithm, and the mixed-precision method. The parallel operating system and I/O optimization technology mainly support large-scale system scaling, while the ultra-large-scale parallel debugging technology, 10-million-core parallel algorithm, and mixed-precision method mainly enhance the efficiency of large-scale applications. Finally, the contributions to various applications running on the Sunway exascale prototype system are introduced, verifying the effectiveness of the parallel supporting environment design.
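The abstract does not spell out the mixed-precision method, so as a generic illustration of the idea, the classic mixed-precision pattern is iterative refinement: do the expensive solve in low precision and correct the residual in high precision. Below is a minimal NumPy sketch of that representative technique, under our own assumptions rather than Sunway's actual implementation.

import numpy as np

# Illustrative mixed-precision iterative refinement: solve in float32,
# then correct with float64 residuals until the answer reaches double accuracy.
def solve_mixed_precision(A, b, iters=3):
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32))  # cheap low-precision solve
        x += dx.astype(np.float64)
    return x

A = np.random.rand(100, 100) + 100 * np.eye(100)  # well-conditioned test matrix
b = np.random.rand(100)
print(np.linalg.norm(A @ solve_mixed_precision(A, b) - b))

The bulk of the arithmetic happens in the fast low-precision solves, while the high-precision residual updates recover full accuracy, which is why such methods raise efficiency on large systems.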
Funding: This work was supported by the National Key Research and Development Program of China under Grant 2024YFE0210800, the National Natural Science Foundation of China under Grant 62495062, and the Beijing Natural Science Foundation under Grant L242017.
Abstract: The Dynamical Density Functional Theory (DDFT) algorithm, derived by combining classical Density Functional Theory (DFT) with the fundamental Smoluchowski dynamical equation, describes how inhomogeneous fluid density distributions evolve over time and plays a significant role in studying inhomogeneous systems. The Sunway Bluelight II supercomputer, a new generation of China's domestically developed supercomputers, possesses powerful computational capabilities, and porting and optimizing industrial software on this platform is of significant importance. To optimize the DDFT algorithm for the Sunway Bluelight II supercomputer and the unique hardware architecture of its SW39000 processor, this work proposes three acceleration strategies to enhance computational efficiency and performance: direct parallel optimization, local-memory constrained optimization for CPEs, and multi-core-group collaboration and communication optimization. The method combines the characteristics of the program's algorithm with the unique hardware architecture of the Sunway Bluelight II supercomputer, optimizing the storage and transmission structures to achieve a closer integration of software and hardware. For the first time, this paper presents Sunway-Dynamical Density Functional Theory (SW-DDFT). Experimental results show that SW-DDFT achieves a speedup of 6.67 times within a single core group compared to the original DDFT implementation; with six core groups (a total of 384 CPEs), the maximum speedup reaches 28.64 times and parallel efficiency reaches 71%, demonstrating excellent acceleration performance.
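For reference, the canonical DDFT evolution equation (the standard textbook form; the paper's particular discretization is not reproduced here) is

\frac{\partial \rho(\mathbf{r},t)}{\partial t} = \Gamma\,\nabla \cdot \left[ \rho(\mathbf{r},t)\, \nabla \frac{\delta F[\rho]}{\delta \rho(\mathbf{r},t)} \right],

where \rho(\mathbf{r},t) is the one-body density, F[\rho] is the Helmholtz free-energy functional of classical DFT, and \Gamma is a mobility coefficient. Setting the right-hand side to zero recovers the equilibrium DFT condition \delta F/\delta\rho = \text{const}.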
Abstract: In June 2018, the United States claimed the No. 1 position in supercomputing according to TOP500, which ranks the 500 most powerful computer systems in the world [1]. The US Department of Energy's Summit machine (Fig. 1) [1] claimed this distinction, which previously had been held by China's Sunway TaihuLight supercomputer.
Abstract: China's first parallel supercomputer capable of more than 10^9 operations per second, named Yinhe-II, was manufactured by the National University of Defense Technology. The main features of the supercomputer are: a 4-processor system, a principal clock frequency of 50 MHz, a 64-bit word length, 256 MB of main memory, two independent input/output subsystems, and a peak rate exceeding 10^9 operations per second.
Abstract: China's first supercomputer capable of 100 million calculations per second was the YH-1, which was independently developed by the Institute of Computer Science at the National University of Defense Technology (NUDT) between 1978 and 1983. YH-1 played an important role in China's national defense construction and national economic development, and it made China one of the few countries in the world to have successfully developed a supercomputer. Based on original archive documents, interviews with relevant personnel, and an analysis of the technological parameters of the YH-1 supercomputer in China and the Cray-1 in the United States, this paper reviews in detail the historical process of the development of YH-1, analyzing its innovations and summarizing the experience and lessons learned from it. This analysis is significant for current military-civilian integration and for the commercialization of university research findings in China.
Abstract: As an important branch of information technology, high-performance computing has expanded its application fields and its influence continues to grow. High-performance computing has always been a key application area in meteorology. We used field research and literature review methods to study the application of high-performance computing in China's meteorological department, and obtained the following results: 1) The Chinese meteorological department has gradually established high-performance computer systems since its first one in 1978, and high-performance computing services now support operational numerical weather prediction models. 2) The Chinese meteorological department has consistently used relatively advanced high-performance computing technology, and its operational system capability has been continuously improved; computing power has become an important symbol of the level of meteorological modernization. 3) High-performance computing technology and meteorological numerical forecasting applications are increasingly integrated and continue to innovate and develop. 4) In the future, high-performance computing resource management will gradually transition from the current local pre-allocation mode to unified local and remote scheduling and shared use. In summary, we conclude that high-performance computing in the meteorological department has a promising future.
Abstract: We have demonstrated the application of the world's fastest supercomputer, Fugaku, located in Japan, to selecting COVID-19 drugs and stopping the spread of the pandemic. Using computer simulation, the supercomputer picked the 30 most effective and promising drugs out of 2,128 potential drug candidates. Twelve of them are under clinical trials outside Japan; some are being tested in Japan. Compared to the world's second-fastest supercomputer, Fugaku reduced the computation time from one year to 10 days. Fugaku was also employed to study the behavior of the airborne aerosolized COVID-19 virus. The 3Cs were suggested to stop the spread of the pandemic: avoid closed spaces, crowded places, and close contact. The progress in vaccine development and the proper use and type of masks are also described in this article. The article should be of great benefit in stopping the spread of, and treating, COVID-19.
Abstract: Exploring the human brain is perhaps the most challenging and fascinating scientific issue of the 21st century. It will facilitate the development of various aspects of society, including economics, education, health care, national defense, and daily life. Artificial intelligence techniques are becoming useful as alternatives to classical techniques or as components of integrated systems; they are used to solve complicated problems in various fields and are becoming increasingly popular. In particular, the investigation of the human brain will advance artificial intelligence techniques by drawing on the accumulating knowledge of neuroscience, brain-machine interface techniques, spiking neural network algorithms, and neuromorphic supercomputers. Consequently, we provide a comprehensive survey of the research on, and motivations for, brain-inspired artificial intelligence and its engineering over its history. The goals of this work are to provide a brief review of the research associated with brain-inspired artificial intelligence and its related engineering techniques, and to motivate further work by elucidating the challenges in the field where new research is required.
Funding: This work was financed by the Swedish Governmental Agency for Innovation Systems (Grant VINNOVA 2024-00436) and the European QuantEra II Program (Grant 101017733) via the Swedish Research Council (Grant 2021-06025).
Abstract: Achieving practical quantum computers (PQCs), each based on millions and even billions of integrated quantum bits (qubits), is essential for tackling real-world computational tasks involving quantum phenomena at atomic and molecular levels [1,2], such as drug discovery [3] and materials design [4]; conventional supercomputers based on digital technology are inherently inefficient for such problems. Our recent analysis [5] of dimensional scalability for the transmon qubit (i.e., the transmission-line shunted plasma oscillation qubit [6]).
Abstract: The authors regret that the acknowledgment section in the final submitted version was unfortunately left out. The section should read: "Acknowledgments: This study is supported by the National Natural Science Foundation of China (41925017). The calculations were partly conducted at the supercomputing center of the University of Science and Technology of China."
Funding: This work was supported by the National Key R&D Program of China under Grant 2023YFB3002204.
Abstract: With the convergence of supercomputing and intelligent computing, the Supercomputer Internet has been proposed to build, deploy, and run convergence applications using cloud-native technologies. Message Passing Interface (MPI) applications are a representative class of supercomputing applications in parallel computing environments. Live migration is the process of transferring a running application to a different physical location with minimal downtime; it enables a number of useful application management capabilities such as load balancing, resource consolidation, and fault tolerance. While several works have studied live migration for MPI workloads, most require modifying the operating system kernel, which hinders broader adoption in data centers. This paper uses container technology and the CRIU tool to implement checkpointing and restarting of a single container in MPI containerized environments while ensuring the continuous execution of the MPI program. The paper validates the feasibility of live migration for MPI workloads by testing with the NAS Parallel Benchmarks (NPB), LAMMPS, and GROMACS. The paper discusses the impact of migration on MPI timing functions and proposes solutions. The paper observes a slight improvement in MPI computational performance due to migration, while also noting an increase in communication latency during the iterative process.
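The core checkpoint/restore cycle that CRIU provides can be sketched as below. This is a bare-bones illustration of CRIU's standard command-line interface (it requires root privileges and a CRIU installation), not the paper's container-migration pipeline, which additionally handles container state and MPI reconnection.

import subprocess

# Checkpoint a running process tree into an image directory, then restore it.
# "--shell-job" tells CRIU the process was started from a shell session.
def checkpoint(pid: int, image_dir: str) -> None:
    subprocess.run(["criu", "dump", "-t", str(pid), "-D", image_dir,
                    "--shell-job"], check=True)

def restore(image_dir: str) -> None:
    subprocess.run(["criu", "restore", "-D", image_dir, "--shell-job"],
                   check=True)

In a containerized setting the same cycle is typically driven through the container runtime (e.g., "docker checkpoint" in Docker's experimental mode), so no kernel modification is needed, which is the adoption advantage the abstract highlights.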
Funding: This work was supported by the National Key R&D Program of China under Grant 2023YFB3002204.
Abstract: High-throughput computing tasks are a typical class of computational tasks in high-performance computing. They are commonly used for large-scale data analysis in high-energy physics, biomedicine, and other fields. These tasks usually comprise a large number of small tasks that are independent of each other but together have a huge demand for computing resources. In the current HPC resource management pattern, users tend to estimate a certain amount of resource demand and have tasks executed only after those resources are granted, which often results in a long waiting time for resources. This paper proposes a non-intrusive Function-as-a-Service (FaaS) framework for the supercomputer Internet, called SuperFaaS. SuperFaaS is compatible with existing HPC resource management systems and supports elastic provisioning of HPC computing resources. SuperFaaS ensures the stable execution of tasks through resource reuse, monitoring, and fault-tolerance mechanisms. Tests show that SuperFaaS achieves service performance overhead comparable to, or even better than, that of Openwhisk. Using the drug screening software AutoDock-Vina to calculate 20,000 drug molecule permutations on a real supercomputing system, the results show that SuperFaaS can greatly reduce the total task completion time (including resource waiting time), and the requested resources achieve more than 95% effective utilization.
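The resource-reuse idea behind such a framework, i.e., many small independent tasks sharing one long-lived pool of workers instead of each waiting for its own allocation, can be illustrated in miniature with a local process pool. This sketch is hypothetical: SuperFaaS schedules across HPC nodes rather than local processes, and the scoring stub stands in for a real docking invocation.

from concurrent.futures import ProcessPoolExecutor

def score_ligand(path: str) -> float:
    # Placeholder for the real work, e.g., invoking AutoDock-Vina via
    # subprocess and parsing the docking score from its output.
    return (hash(path) % 1000) / 1000.0

if __name__ == "__main__":
    ligands = [f"ligand_{i}.pdbqt" for i in range(20000)]  # hypothetical inputs
    # One pool of reusable workers serves all 20,000 tasks; the scheduling
    # overhead is paid once for the pool, not once per task.
    with ProcessPoolExecutor(max_workers=64) as pool:
        scores = list(pool.map(score_ligand, ligands, chunksize=256))
    print(max(scores))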
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61972412, 62202486, and 12102468).
Abstract: Network technology is the basis for large-scale, high-efficiency network computing, such as supercomputing, cloud computing, big data processing, and artificial intelligence computing. The network technologies of network computing systems in different fields not only learn from each other but also involve targeted design and optimization. Considering the field comprehensively, this paper summarizes three development trends for network technologies in different fields: integration, differentiation, and optimization. Integration reflects that there are no clear boundaries between network technologies in different fields; differentiation reflects that there are unique solutions in different application fields, or innovative solutions under new application requirements; and optimization reflects that there are optimizations for specific scenarios. This paper can help academic researchers consider what should be done in the future, and help industry personnel consider how to build efficient practical network systems.
Funding: Project supported by the National Key Technology R&D Program of China (No. 2016YFA0602200).
Abstract: With various exascale systems in different countries planned over the next three to five years, developing application software for such unprecedented computing capabilities and parallel scaling becomes a major challenge. In this study, we start our discussion with the current 125-Pflops Sunway TaihuLight system in China and its related application challenges and solutions. Based on our current experience with Sunway TaihuLight, we provide a projection into the next decade and discuss potential challenges and possible trends we would probably observe in future high performance computing software.