Funding: Supported in part by the Talent Fund of Beijing Jiaotong University (2023XKRC017) and in part by the Research and Development Project of China State Railway Group Co., Ltd. (P2022Z003).
Abstract: Further improving railway innovation capacity and technological strength is an important goal of the 14th Five-Year Plan for railway scientific and technological innovation. It includes promoting the deep integration of cutting-edge technologies with railway systems, strengthening the research and application of intelligent railway technologies, applying green computing technologies, and advancing the collaborative sharing of transportation big data. High-speed rail system tasks must process huge amounts of data under heavy workloads while meeting ultra-fast response requirements. It is therefore essential to improve computational efficiency by applying High Performance Computing (HPC) to high-speed rail systems, as HPC offers an effective way to improve the performance, efficiency, and safety of such systems. In this review, we introduce and analyze research on applications of high performance computing in the field of high-speed railways. These HPC applications are cataloged into four broad categories: fault diagnosis, network and communication, management systems, and simulation. Challenges and open issues are discussed, and further research directions are suggested.
Abstract: Cloud computing technology is changing the development and usage patterns of IT infrastructure and applications. Virtualized and distributed systems, together with unified management and scheduling, have greatly improved computing and storage. Management has become easier, and OAM costs have been significantly reduced. Cloud desktop technology is developing rapidly. With this technology, users can flexibly and dynamically use virtual machine resources, companies' efficiency in using and allocating resources is greatly improved, and information security is ensured. In most existing virtual cloud desktop solutions, however, computing and storage are bound together, and data is stored as image files. This limits the flexibility and expandability of such systems and is insufficient to meet customers' requirements in different scenarios.
Funding: Supported by the National Natural Science Foundation of China [grant numbers 41675100, 91337110]; the Third Tibetan Plateau Scientific Experiment: Observations for Boundary Layer and Troposphere [GYHY201406001]; the Key Research Program of Frontier Sciences, Chinese Academy of Sciences (CAS) (QYZDY-SSW-DQC018); and the Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund (2nd phase).
Abstract: High computational performance is extremely important for climate system models, especially in ultra-high-resolution model development. In this study, the computational performance of the Finite-volume Atmospheric Model of the IAP/LASG (FAMIL) was comprehensively evaluated on Tianhe-2, which was the world's top-ranked supercomputer from June 2013 to May 2016. A standardized Atmospheric Model Intercomparison Project (AMIP)-type experiment was carried out, focusing on the computational performance of each node as well as the simulated years per day (SYPD), the running cost speedup, and the scalability of FAMIL. The results indicated that (1) based on five indexes (CPU usage, the percentage of CPU time spent in kernel mode and in message-passing waiting (CPU SW), code vectorization (VEC), average Gflops (Gflops_AVE), and peak Gflops (Gflops_PK)), FAMIL shows excellent computational performance on every Tianhe-2 computing node; (2) considering SYPD and the cost speedup of FAMIL systematically, the optimal choice of Message Passing Interface (MPI) numbers of processors (MNPs) appears when FAMIL uses 384 and 1536 MNPs for C96 (100 km) and C384 (25 km), respectively; and (3) FAMIL shows positive scalability as more threads are used to drive the model. Considering the fast network and the acceleration cards of the MIC architecture on Tianhe-2, there is still significant room to improve the computational performance of FAMIL.
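The throughput and scaling metrics cited in this abstract (SYPD, speedup, parallel efficiency) reduce to simple ratios of wall-clock timings. The C++ sketch below shows how they are typically computed; the process counts and per-model-year timings are illustrative placeholders, not the FAMIL measurements.

```cpp
// Minimal sketch: deriving SYPD, speedup, and parallel efficiency from
// hypothetical wall-clock timings at several MPI process counts.
#include <iostream>
#include <vector>

int main() {
    // Illustrative values only: seconds of wall-clock time per simulated model year.
    std::vector<int> procs = {96, 192, 384, 768, 1536};
    std::vector<double> secondsPerModelYear = {5400.0, 2800.0, 1500.0, 900.0, 650.0};

    const double secondsPerDay = 86400.0;
    const double baseCost = procs[0] * secondsPerModelYear[0];  // core-seconds at the smallest run

    std::cout << "procs  SYPD  speedup  efficiency\n";
    for (size_t i = 0; i < procs.size(); ++i) {
        double sypd = secondsPerDay / secondsPerModelYear[i];          // simulated years per day
        double speedup = secondsPerModelYear[0] / secondsPerModelYear[i];
        double efficiency = baseCost / (procs[i] * secondsPerModelYear[i]);
        std::cout << procs[i] << "  " << sypd << "  " << speedup << "  " << efficiency << "\n";
    }
    return 0;
}
```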
Funding: Project supported by the National Natural Science Foundation of China (Nos. 10232040, 10572002 and 10572003).
Abstract: A new direct method for solving unsymmetrical sparse linear systems (USLS) arising from meshless methods was introduced. Certain meshless methods, such as the meshless local Petrov-Galerkin (MLPG) method, require the solution of large USLS. The proposed method for the unsymmetrical case performs the factorization symmetrically on the upper and lower triangular portions of the matrix, which differs from previous work based on a general unsymmetrical process and attains higher performance. It is shown that the solution algorithm for USLS can be derived simply from existing approaches for the symmetrical case. The new matrix factorization algorithm can be implemented easily by modifying a standard JKI symmetrical matrix factorization code. Multi-blocked out-of-core strategies were also developed to expand the solution scale. The approach convincingly increases the speed of the solution process, as demonstrated by numerical tests.
Funding: Project supported by the Research Fund for the Doctoral Program of Higher Education (No. 20030001112).
Abstract: In previous papers, a high performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite high efficiency for a large fraction of finite element analysis benchmark tests, the MFLOPS (millions of floating-point operations per second) of the LDL^T factorization in these benchmarks vary from 100 to 456 on a Dell Pentium IV 850 MHz machine, depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master-equations and searches for an appropriate depth of unrolling. The new solver provides higher MFLOPS for the LDL^T factorization of the benchmark tests, and therefore speeds up the solution process.
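For readers unfamiliar with the kernel being tuned, the sketch below factorizes a small dense symmetric matrix as A = L·D·L^T (unit lower-triangular L, diagonal D). It is the plain textbook recurrence, with none of the cell-sparse storage or two-level unrolling described in the abstract; it only shows the arithmetic that those optimizations accelerate.

```cpp
// Plain dense LDL^T factorization: A = L * D * L^T, L unit lower triangular.
// Illustrative only: no sparsity, supernodes, or loop unrolling as in the solver above.
#include <iostream>
#include <vector>

int main() {
    const int n = 3;
    // Small symmetric positive definite test matrix.
    std::vector<std::vector<double>> A = {{4, 2, 2}, {2, 5, 3}, {2, 3, 6}};
    std::vector<std::vector<double>> L(n, std::vector<double>(n, 0.0));
    std::vector<double> D(n, 0.0);

    for (int j = 0; j < n; ++j) {
        L[j][j] = 1.0;
        double djj = A[j][j];
        for (int k = 0; k < j; ++k) djj -= L[j][k] * L[j][k] * D[k];
        D[j] = djj;
        for (int i = j + 1; i < n; ++i) {
            double lij = A[i][j];
            for (int k = 0; k < j; ++k) lij -= L[i][k] * L[j][k] * D[k];
            L[i][j] = lij / D[j];
        }
    }

    std::cout << "D = ";
    for (double d : D) std::cout << d << " ";   // expected: 4 4 4 for this matrix
    std::cout << "\n";
    return 0;
}
```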
Abstract: The aeroacoustic performance of fans is important because of their widespread application; the original aim of this paper is therefore to evaluate the noise generated under different geometric parameters. In the current study, the effect of five geometric parameters on the performance of a bladeless fan was investigated. Airflow through the fan was analyzed by simulating it within a 2 m × 2 m × 4 m room. The flow field inside the fan was analyzed and its performance evaluated by solving the mass and momentum conservation equations for the aerodynamic investigation and the FW-H noise equations for the aeroacoustic analysis. The Eppler 473 airfoil profile was used as the cross-section of the fan. Five distinct parameters were considered: the height of the fan cross-section, the outlet angle of the flow relative to the fan axis, the thickness of the airflow outlet slit, the hydraulic diameter, and the aspect ratio for circular and quadratic cross-sections. To validate the acoustic code, the numerical solution of the FW-H noise equations for the NACA0012 airfoil was compared with experimental results; the FW-H model was selected to predict the noise generated by the bladeless fan because the numerical results agreed well with the experimental ones for NACA0012. To validate the 3-D numerical results, simulations of a round jet were compared with experimental data and showed good agreement. Finally, SPL and OASPL diagrams are presented to show the effect of each parameter on fan performance.
Abstract: This paper proposes an algorithm for increasing the security of virtual machines in cloud computing. An imbalance between load and energy has been one of the drawbacks of older approaches to server provisioning and hosting: if two virtual servers are active on one host and the energy load on that host grows, it allocates the energy of other (virtual) hosts to itself to stay stable, which often leads to hardware overflow errors and user dissatisfaction. Cloud-based methods reduce this problem but do not eliminate it; the proposed algorithm therefore not only provides a suitable security foundation but also distributes energy consumption and load fairly among virtual servers. The proposed algorithm is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. The comparisons show that the proposed method offers high computing performance and efficiency and consumes less energy in the network.
Abstract: Today, PC-class machines are quite popular in the HPC area, especially for problems that require good cost/performance ratios. One drawback of these machines is their poor memory throughput. One reason for this poor performance is the limited mapping capability of the TLB, a buffer that accelerates virtual memory access. In this report, I show that the mapping capability and the resulting performance can be improved with the multi-granularity TLB feature that some processors provide, and that the new TLB handling routine can be incorporated into the demand paging system of Linux.
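The effect described here can be made visible with a generic microbenchmark that compares sequential access against 4 KiB-strided access, where every reference lands on a different page and therefore a different TLB entry. The sketch below is such a stand-alone test under that assumption; it does not use the multi-granularity TLB feature or the modified Linux demand-paging routine from the report.

```cpp
// Microbenchmark contrasting sequential access with 4 KiB-strided access.
// The strided sweep touches a new page on almost every reference, stressing the TLB.
#include <chrono>
#include <iostream>
#include <vector>

static double sweep(std::vector<char>& buf, size_t stride) {
    auto t0 = std::chrono::steady_clock::now();
    volatile long long sum = 0;                       // volatile keeps the loop from being optimized away
    for (int rep = 0; rep < 8; ++rep)
        for (size_t off = 0; off < stride; ++off)     // together the offsets cover every byte
            for (size_t i = off; i < buf.size(); i += stride)
                sum += buf[i];
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    std::vector<char> buf(64ull << 20, 1);            // 64 MiB working set
    std::cout << "sequential:    " << sweep(buf, 1) << " s\n";
    std::cout << "4 KiB stride:  " << sweep(buf, 4096) << " s\n";
    return 0;
}
```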
Funding: Supported in part by the National 863 Program (2009AA01Z256, 2009AA01A345), the National 973 Program (2007CB310705), and the NSFC (60932004), P.R. China.
Abstract: This paper analyzes the physical potential, computing performance benefit, and power consumption of optical interconnects. Compared with electrical interconnects, optical ones show clear advantages based on an analysis of physical factors. At the same time, because recent developments raise the question of whether optical interconnect technologies with higher bandwidth but higher cost are worth deploying, a computing performance comparison is performed. To meet the increasing demands of large-scale parallel and multi-processor computing tasks, an analytic method to evaluate the parallel computing performance of interconnect systems is proposed. Both a bandwidth-limited model and a full-bandwidth model are investigated, with speedup and efficiency selected to represent the parallel performance of an interconnect system. Using the proposed models, we show the performance gap between optically and electrically interconnected systems. A further investigation of the power consumption of commercial products showed that deploying parallel interconnects reduces unit power consumption. From this analysis of computing performance and power dissipation, we find that parallel optical interconnects offer a valuable combination of high performance and low energy consumption. Considering data centers now under construction, substantial power could be saved if parallel optical interconnect technologies are used.
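The speedup and efficiency figures of merit follow the usual definitions S(p) = T(1)/T(p) and E(p) = S(p)/p. The sketch below evaluates a deliberately simplified bandwidth-limited model, where the per-processor communication volume divided by link bandwidth is added to the divided compute time; the parameters and the two bandwidth settings are illustrative assumptions, not the models or numbers from this paper.

```cpp
// Toy bandwidth-limited parallel performance model (illustrative parameters only):
//   T(p) = T_serial / p + bytes_exchanged / bandwidth   (for p > 1)
// Speedup S(p) = T(1) / T(p), efficiency E(p) = S(p) / p.
#include <iostream>

int main() {
    const double t_serial = 100.0;                 // seconds of pure computation on one processor
    const double bytes_per_proc = 2.0e9;           // assumed data each processor exchanges per run
    const double bandwidths[] = {1.25e9, 12.5e9};  // 10 Gb/s "electrical" vs 100 Gb/s "optical", in bytes/s

    for (double bw : bandwidths) {
        std::cout << "link bandwidth " << bw * 8 / 1e9 << " Gb/s\n";
        for (int p = 1; p <= 256; p *= 4) {
            double t_comm = (p > 1) ? bytes_per_proc / bw : 0.0;
            double t_p = t_serial / p + t_comm;
            double speedup = t_serial / t_p;
            std::cout << "  p=" << p << "  S=" << speedup << "  E=" << speedup / p << "\n";
        }
    }
    return 0;
}
```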
Funding: G.F. acknowledges CINECA computational grant ISCRA-B IsB17-SPONGES, no. HP10B9ZOKQ; partial support from PRIN projects CUP E82F16003010006 (principal investigator, G.F. for the Tor Vergata Research Unit) and CUP E84I19001020006 (principal investigator, G. Bella); and support from the European Research Council under the Horizon 2020 Programme, advanced grant agreement no. 739964 ('COPMAT'). M.P. acknowledges the support of the National Science Foundation under grant no. CMMI 1901697.
Abstract: We detail some of the understudied aspects of the flow inside and around the hexactinellid sponge Euplectella aspergillum. By leveraging the flexibility of the Lattice Boltzmann Method, High Performance Computing simulations are performed to dissect the complex conditions corresponding to the actual environment at the bottom of the ocean, at depths between 100 and 1,000 m. These large-scale simulations unveil potential clues about the evolutionary adaptations of these deep-sea sponges in response to the surrounding fluid flow, and they open the path to future investigations at the interface between physics, engineering and biology.
Abstract: The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This approach can handle high performance computation and manages the use of all computing platforms at once. Federated learning is a collaborative machine learning approach that does not require centralized training data. The proposed system detects intrusion attacks without human intervention, identifies anomalous deviations in device communication behavior that are potentially caused by malicious adversaries, and can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed against the assumed target service. The updated local model is sent to the centralized server in the jungle computing environment to detect attack patterns. Federated learning helps the system learn the type of attack seen by each device, paving the way toward full coverage of malicious behaviors. In the proposed work, we implement an intrusion detection system that is accurate, has a low False Positive Rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time to complete a round is less than two seconds, with an accuracy rate of 96%.
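The aggregation step at the core of such a federated setup is a weighted average of locally trained model parameters, so raw traffic never leaves the devices. The sketch below shows that step in isolation (FedAvg-style), with made-up weight vectors and sample counts standing in for the per-device intrusion detection models.

```cpp
// Federated averaging of local model parameters: the server combines client
// updates weighted by each client's local training-sample count.
#include <iostream>
#include <vector>

std::vector<double> federatedAverage(const std::vector<std::vector<double>>& clientWeights,
                                     const std::vector<double>& sampleCounts) {
    std::vector<double> global(clientWeights[0].size(), 0.0);
    double total = 0.0;
    for (double n : sampleCounts) total += n;
    for (size_t c = 0; c < clientWeights.size(); ++c)
        for (size_t j = 0; j < global.size(); ++j)
            global[j] += (sampleCounts[c] / total) * clientWeights[c][j];
    return global;
}

int main() {
    // Hypothetical parameter vectors from three devices' local IDS models.
    std::vector<std::vector<double>> clients = {
        {0.10, -0.40, 0.25}, {0.05, -0.35, 0.30}, {0.20, -0.50, 0.10}};
    std::vector<double> samples = {1000, 4000, 500};   // local training-set sizes

    std::vector<double> global = federatedAverage(clients, samples);
    for (double w : global) std::cout << w << " ";
    std::cout << "\n";
    return 0;
}
```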
基金"This paper is an extended version of "SpotMPl: a framework for auction-based HPC computing using amazon spot instances" published in the International Symposium on Advances of Distributed Computing and Networking (ADCN 2011).Acknowledgment This research is supported in part by the National Science Foundation grant CNS 0958854 and educational resource grants from Amazon.com.
Abstract: Cloud computing is expanding widely in the world of IT infrastructure, due partly to the cost-saving effect of economies of scale. Fair market conditions can in theory provide a healthy environment that reflects the most reasonable costs of computation. While fixed cloud pricing provides an attractive low entry barrier for compute-intensive applications, both the consumer and the supplier of computing resources can see high efficiency for their investments by participating in auction-based exchanges. There are strong incentives for cloud providers to offer auctioned resources; from the consumer perspective, however, using these resources is a sparsely discussed challenge. This paper reports a methodology and framework designed to address the challenges of running HPC (High Performance Computing) applications on auction-based cloud clusters. The authors focus on HPC applications and describe a method for determining bid-aware checkpointing intervals. They extend a theoretical model for determining checkpoint intervals using statistical analysis of pricing histories. The latest developments in the SpotHPC framework are also introduced, which aim at facilitating the managed execution of real MPI applications in auction-based cloud environments. The authors use their model to simulate a set of algorithms with different computing and communication densities. The results show the complex interactions between optimal bidding strategies and parallel application performance.
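As background for checkpoint-interval selection, a classical baseline is the Young/Daly approximation τ ≈ √(2CM), which balances the cost C of writing a checkpoint against the mean time M between interruptions (for spot instances, roughly the mean time until being out-bid). The sketch below computes that baseline with assumed values; the bid-aware model in this paper additionally conditions on pricing history, which is not reproduced here.

```cpp
// Young/Daly first-order optimal checkpoint interval: tau ~ sqrt(2 * C * M),
// where C is the time to write one checkpoint and M is the mean time between
// interruptions (for spot instances, roughly the mean time until being out-bid).
#include <cmath>
#include <iostream>

int main() {
    const double checkpoint_cost_s = 120.0;                        // assumed time to save application state
    const double mean_interrupt_s[] = {1800.0, 7200.0, 21600.0};   // assumed 0.5 h, 2 h, 6 h

    for (double M : mean_interrupt_s) {
        double tau = std::sqrt(2.0 * checkpoint_cost_s * M);
        // Approximate fraction of wall-clock time lost to checkpoints plus expected rework.
        double overhead = checkpoint_cost_s / tau + tau / (2.0 * M);
        std::cout << "M=" << M / 3600.0 << " h  ->  interval=" << tau / 60.0
                  << " min, overhead=" << overhead * 100.0 << " %\n";
    }
    return 0;
}
```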
Funding: The authors acknowledge support from the U.S. Department of Energy through its Advanced Grid Modeling program, the Exascale Computing Program (ECP), the Grid Modernization Laboratory Consortium (GMLC), the Advanced Research Projects Agency-Energy (ARPA-E), the National Quantum Information Science Research Centers' Co-design Center for Quantum Advantage (C2QA), and the Office of Advanced Scientific Computing Research (ASCR).
Abstract: With the global trend of pursuing clean energy and decarbonization, power systems have been evolving at a pace never seen before in the history of electrification. This evolution makes the power system more dynamic and more distributed, with higher uncertainty. These new behaviors bring significant challenges in power system modeling and simulation, as more data need to be analyzed for larger systems and more complex models need to be solved in shorter time periods. Conventional computing approaches will not be sufficient for future power systems. This paper provides a historical review of computing for power system operation and planning, discusses technology advancements in high performance computing (HPC), and describes the drivers for employing HPC techniques. High performance computing application examples with different HPC techniques, including the latest quantum computing, are also presented to show how HPC can help us be well prepared to meet the requirements of power system computing in a clean energy future.
Abstract: In recent years, the widespread adoption of parallel computing, especially in multi-core processors and high-performance computing environments, ushered in a new era of efficiency and speed. This trend was particularly noteworthy in the field of image processing, which witnessed significant advancements. This parallel computing project explored the field of parallel image processing, with a focus on the grayscale conversion of color images. Our approach involved integrating OpenMP into our framework for parallelization to execute a critical image processing task: grayscale conversion. By using OpenMP, we strategically enhanced the overall performance of the conversion process by distributing the workload across multiple threads. The primary objectives of our project revolved around optimizing computation time and improving overall efficiency, particularly in the grayscale conversion of color images. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through the effective distribution of tasks among these cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency with an increasing number of cores. This underscored the importance of a carefully optimized parallelization strategy, considering factors like load balancing and minimizing communication overhead. Despite these challenges, the overall scalability and efficiency achieved with parallel image processing underscored OpenMP's effectiveness in accelerating image manipulation tasks.
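A minimal version of the kind of OpenMP grayscale kernel described above might look like the following. The image is synthetic, and the 0.299/0.587/0.114 luminosity weights are a common convention that may differ from the project's exact conversion.

```cpp
// OpenMP-parallel RGB-to-grayscale conversion of a synthetic image.
// Compile with OpenMP enabled (e.g., -fopenmp) to run the loop across threads.
#include <omp.h>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const long long width = 4096, height = 4096;
    const long long pixels = width * height;
    std::vector<uint8_t> rgb(pixels * 3, 200);   // synthetic interleaved RGB image
    std::vector<uint8_t> gray(pixels);

    double t0 = omp_get_wtime();
    #pragma omp parallel for schedule(static)
    for (long long i = 0; i < pixels; ++i) {
        // Common luminosity weighting for RGB-to-gray conversion.
        gray[i] = static_cast<uint8_t>(0.299 * rgb[3 * i] +
                                       0.587 * rgb[3 * i + 1] +
                                       0.114 * rgb[3 * i + 2]);
    }
    double t1 = omp_get_wtime();

    std::cout << "threads=" << omp_get_max_threads()
              << "  time=" << (t1 - t0) << " s\n";
    return 0;
}
```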
Funding: Funded by the Defense Advanced Research Projects Agency (DARPA) Low Temperature Logic Technology (LTLT) program.
Abstract: Low temperature complementary metal oxide semiconductor (CMOS), or cryogenic CMOS, is a promising avenue for the continuation of Moore's law while serving the needs of high performance computing. With temperature as a control "knob" to steepen the subthreshold slope of CMOS devices, the supply voltage can be reduced with no impact on operating speed. With optimal threshold voltage engineering, the device ON current can be further enhanced, translating to higher performance. In this article, experimentally calibrated data were used to tune the threshold voltage and to investigate the power-performance-area of cryogenic CMOS at the device, circuit, and system levels. We also present results from the measurement and analysis of functional memory chips fabricated in 28 nm bulk CMOS and 22 nm fully depleted silicon-on-insulator (FDSOI) operating at cryogenic temperatures. Finally, the challenges and opportunities in the further development and deployment of such systems are discussed.
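The "temperature knob" argument rests on the thermionic limit of the subthreshold slope, SS ≈ n·(kT/q)·ln 10, which shrinks roughly linearly with temperature. The sketch below evaluates that textbook expression for a few temperatures with an assumed ideality factor; real cryogenic devices saturate above this ideal limit, so the numbers are indicative only.

```cpp
// Thermionic-limit subthreshold slope SS = n * (k*T/q) * ln(10), in mV/decade.
// n is an assumed ideality factor; real cryogenic MOSFETs saturate above this limit.
#include <cmath>
#include <iostream>

int main() {
    const double k = 1.380649e-23;     // Boltzmann constant, J/K
    const double q = 1.602176634e-19;  // elementary charge, C
    const double n = 1.2;              // assumed ideality factor

    for (double T : {300.0, 77.0, 4.2}) {
        double ss_mV_per_dec = n * (k * T / q) * std::log(10.0) * 1000.0;
        std::cout << "T = " << T << " K  ->  SS ~ " << ss_mV_per_dec << " mV/dec\n";
    }
    return 0;
}
```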
Funding: Support via the NSF grants NSF-19-04774, NSF-AST-2009776, NASA-2020-1241, and the NASA grant 80NSSC22K0628.
Abstract: GPU computing is expected to play an integral part in all modern Exascale supercomputers. It is also expected that higher order Godunov schemes will make up a significant fraction of the application mix on such supercomputers. It is, therefore, very important to prepare the community of users of higher order schemes for hyperbolic PDEs for this emerging opportunity. Not every algorithm that is used in the space-time update of the solution of hyperbolic PDEs will take well to GPUs. However, we identify a small core of algorithms that take exceptionally well to GPU computing. Based on an analysis of available options, we have been able to identify weighted essentially non-oscillatory (WENO) algorithms for spatial reconstruction, along with arbitrary derivative (ADER) algorithms for time extension followed by a corrector step, as the winning three-part algorithmic combination. Even when a winning subset of algorithms has been identified, it is not clear that they will port seamlessly to GPUs. The low data throughput between CPU and GPU, as well as the very small cache sizes on modern GPUs, implies that we have to think through all aspects of the task of porting an application to GPUs. For that reason, this paper identifies the techniques and tricks needed for making a successful port of this very useful class of higher order algorithms to GPUs. Application codes face a further challenge: the GPU results need to be practically indistinguishable from the CPU results in order for the legacy knowledge bases embedded in these application codes to be preserved during the port to GPUs. This requirement often makes a complete code rewrite impossible. For that reason, it is safest to use an approach based on OpenACC directives, so that most of the code remains intact (as long as it was originally well-written). This paper is intended to be a one-stop shop for anyone seeking to make an OpenACC-based port of a higher order Godunov scheme to GPUs. We focus on three broad and high-impact areas where higher order Godunov schemes are used. The first area is computational fluid dynamics (CFD). The second is computational magnetohydrodynamics (MHD), which has an involution constraint that has to be mimetically preserved. The third is computational electrodynamics (CED), which has involution constraints and also extremely stiff source terms. Together, these three diverse uses of higher order Godunov methodology cover many of the most important application areas. In all three cases, we show that the optimal use of algorithms, techniques, and tricks, along with the use of OpenACC, yields superlative speedups on GPUs. As a bonus, we find a most remarkable and desirable result: some higher order schemes, with their larger operations count per zone, show better speedup than lower order schemes on GPUs. In other words, the GPU is an optimal stratagem for overcoming the higher computational complexity of higher order schemes. Several avenues for future improvement have also been identified. A scalability study is presented for a real-world application using GPUs and comparable numbers of high-end multicore CPUs. It is found that GPUs offer a substantial performance benefit over a comparable number of CPUs, especially when all the methods designed in this paper are used.
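None of the authors' code is reproduced here. The sketch below only illustrates the OpenACC directive style the paper advocates (a data region that keeps arrays resident on the GPU across time steps, with a parallel loop on each sweep), applied to a first-order upwind update for 1D linear advection rather than to a higher order Godunov scheme. Without an OpenACC compiler the pragmas are ignored and the code runs serially.

```cpp
// OpenACC-style offload of a simple finite-volume sweep: 1D linear advection
// with a first-order upwind flux and periodic boundaries.
#include <iostream>
#include <vector>

int main() {
    const int n = 1 << 20;
    const double a = 1.0, dx = 1.0 / n, cfl = 0.8, dt = cfl * dx / a;
    std::vector<double> u(n, 0.0), unew(n, 0.0);
    for (int i = n / 4; i < n / 2; ++i) u[i] = 1.0;   // square-wave initial condition

    double* up = u.data();
    double* un = unew.data();

    // Keep both arrays resident on the GPU for the whole time loop.
    #pragma acc data copy(up[0:n]) create(un[0:n])
    for (int step = 0; step < 1000; ++step) {
        #pragma acc parallel loop present(up[0:n], un[0:n])
        for (int i = 0; i < n; ++i) {
            int im1 = (i == 0) ? n - 1 : i - 1;                    // periodic boundary
            un[i] = up[i] - a * dt / dx * (up[i] - up[im1]);       // upwind flux difference
        }
        #pragma acc parallel loop present(up[0:n], un[0:n])
        for (int i = 0; i < n; ++i) up[i] = un[i];                 // advance to the new time level
    }

    std::cout << "u[n/3] = " << up[n / 3] << "\n";
    return 0;
}
```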
Funding: This research was funded by the Zhejiang 'JIANBING' R&D Project (No. 2022C01055) and the Zhejiang Provincial Department of Transport Technology Project (No. 2024011).
Abstract: To improve the efficiency of evolutionary algorithms (EAs) for solving complex problems with large populations, this paper proposes a scalable parallel evolution optimization (SPEO) framework with an elastic asynchronous migration (EAM) mechanism. SPEO addresses two main challenges that arise in large-scale parallel EAs: (1) the heavy communication workload from extensive information exchange across numerous processors, which reduces computational efficiency, and (2) the loss of population diversity due to similar solutions being generated and shared by many processors. The EAM mechanism introduces a self-adaptive communication scheme to mitigate communication overhead, while a diversity-preserving buffer helps maintain diversity by filtering out similar solutions. Experimental results on eight CEC2014 benchmark functions using up to 512 CPU cores on the Australian National Computational Infrastructure (NCI) platform demonstrate that SPEO not only scales efficiently with an increasing number of processors but also achieves better solution quality than state-of-the-art island-based EAs.
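One of the two ingredients, the diversity-preserving buffer, can be illustrated on its own: an incoming migrant is accepted only if it lies far enough (in Euclidean distance) from every solution already buffered. The sketch below is a generic stand-alone version with an assumed distance threshold, not the SPEO implementation.

```cpp
// Diversity-preserving migration buffer: accept a migrant only if it differs
// enough (in Euclidean distance) from every solution already in the buffer.
#include <cmath>
#include <iostream>
#include <vector>

struct DiversityBuffer {
    double minDistance;                        // assumed similarity threshold
    std::vector<std::vector<double>> pool;

    bool offer(const std::vector<double>& migrant) {
        for (const auto& s : pool) {
            double d2 = 0.0;
            for (size_t i = 0; i < s.size(); ++i)
                d2 += (s[i] - migrant[i]) * (s[i] - migrant[i]);
            if (std::sqrt(d2) < minDistance) return false;   // too similar: filtered out
        }
        pool.push_back(migrant);
        return true;
    }
};

int main() {
    DiversityBuffer buffer{0.5, {}};
    std::cout << buffer.offer({1.0, 2.0}) << "\n";   // 1: accepted (buffer empty)
    std::cout << buffer.offer({1.1, 2.1}) << "\n";   // 0: rejected, within 0.5 of the first
    std::cout << buffer.offer({3.0, 0.0}) << "\n";   // 1: accepted
    return 0;
}
```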
Funding: Supported by the Natural Science Foundation of Hunan Province (No. 2022JJ10066) and the National Natural Science Foundation of China (Grant No. 62272477).
Abstract (from Section 1, Introduction): The perpetual demand for computational power in scientific computing incessantly propels high-performance computing (HPC) systems toward Zettascale computation and even beyond [1,2]. Concurrently, the ascension of artificial intelligence (AI) has engendered a marked surge in computational requirements, doubling the required computing performance every 3.4 months [3]. The collective pursuit of ultra-high computational capability has positioned AI and scientific computing as the preeminent twin drivers of HPC.
Funding: Partly supported by the Supercomputer Application Project Trial Funding from the Wuxi Jiangnan Institute of Computing Technology (BB2340000016); the Strategic Priority Research Program of the Chinese Academy of Sciences (XDC01040100); the National Natural Science Foundation of China (21688102, 21803066); the Anhui Initiative in Quantum Information Technologies (AHY090400); the National Key Research and Development Program of China (2016YFA0200604); the Fundamental Research Funds for Central Universities (WK2340000091); the Chinese Academy of Sciences Pioneer Hundred Talents Program (KJ2340000031); and the Research Start-Up Grants (KY2340000094) and Academic Leading Talents Training Program (KY2340000103) from the University of Science and Technology of China.
Abstract: High performance computing (HPC) is a powerful tool for accelerating Kohn-Sham density functional theory (KS-DFT) calculations on modern heterogeneous supercomputers. Here, we describe a massively parallel implementation of the discontinuous Galerkin density functional theory (DGDFT) method on the Sunway TaihuLight supercomputer. The DGDFT method uses adaptive local basis (ALB) functions generated on the fly during the self-consistent field (SCF) iteration to solve the KS equations with precision comparable to a plane-wave basis set. In particular, the DGDFT method adopts a two-level parallelization strategy that deals with various types of data distribution, task scheduling, and data communication schemes, combined with the master-slave multi-thread heterogeneous parallelism of the SW26010 processor, resulting in large-scale HPC KS-DFT calculations on the Sunway TaihuLight supercomputer. We show that the DGDFT method can scale up to 8,519,680 processing cores (131,072 core groups) on the Sunway TaihuLight supercomputer when studying the electronic structures of two-dimensional (2D) metallic graphene systems that contain tens of thousands of carbon atoms.
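The two-level parallelization strategy maps naturally onto nested MPI communicators. The sketch below shows only that generic pattern: MPI_COMM_WORLD is split into fixed-size groups so that one level can distribute work across groups while a second level parallelizes within each group. The group size is an assumption for illustration, and this is not the DGDFT code.

```cpp
// Generic two-level MPI decomposition: split the world communicator into
// fixed-size groups (outer level) plus a per-group communicator (inner level).
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int worldRank = 0, worldSize = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &worldRank);
    MPI_Comm_size(MPI_COMM_WORLD, &worldSize);

    const int groupSize = 4;               // assumed ranks per group
    int color = worldRank / groupSize;     // which group this rank belongs to

    MPI_Comm groupComm;
    MPI_Comm_split(MPI_COMM_WORLD, color, worldRank, &groupComm);

    int groupRank = 0;
    MPI_Comm_rank(groupComm, &groupRank);

    // Outer level: work is distributed across groups (by color).
    // Inner level: ranks within groupComm cooperate on that group's share.
    std::printf("world rank %d/%d -> group %d, local rank %d\n",
                worldRank, worldSize, color, groupRank);

    MPI_Comm_free(&groupComm);
    MPI_Finalize();
    return 0;
}
```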