Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs (DeepSeek, ChatGPT, and Claude), along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
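One such nontrivial decision is the choice of input-function space for neural operator learning. The sketch below is a minimal illustration of that decision, not the setup used in the paper: it draws random input functions from a zero-mean Gaussian random field with a squared-exponential covariance, and the grid size, length scale, and kernel are assumptions.

```python
import numpy as np

def sample_grf(n_samples, n_points=128, length_scale=0.2, seed=0):
    """Draw random input functions u(x) on [0, 1] from a zero-mean Gaussian
    random field with a squared-exponential (RBF) covariance kernel."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_points)
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2) + 1e-8 * np.eye(n_points)  # jitter for stability
    L = np.linalg.cholesky(K)
    samples = (L @ rng.standard_normal((n_points, n_samples))).T
    return x, samples  # samples[i] is one input function evaluated on the grid

x, u = sample_grf(n_samples=100)
print(u.shape)  # (100, 128)
```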
Peta-scale high-performance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer to use programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenMP. This has led to the development of domain-specific programming that provides domain-specific programming interfaces but abstracts away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-1A, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
Scientific computing libraries, whether in-house or open-source, have witnessed enormous progress in both engineering and scientific research. Therefore, it is important to ensure that modifications to the source code, prompted by bug fixing or new feature development, do not compromise the accuracy and functionality that have already been validated and verified. This paper introduces a method for establishing and implementing an automatic regression test environment, using the open-source multi-physics library SPHinXsys as an illustrative example. Initially, a reference database for each benchmark test is generated from observed data across multiple executions. This comprehensive database encapsulates the maximum variation range of metrics for different strategies, including the time-averaged, ensemble-averaged, and dynamic time warping methods. It accounts for uncertainties arising from parallel computing, particle relaxation, physical instabilities, and more. Subsequently, new results obtained after source code modifications undergo testing based on a curve-similarity comparison against the reference database. Whenever the source code is updated, the regression test is automatically executed for all test cases, providing a comprehensive assessment of the validity of the current results. This regression test environment has been successfully implemented in all dynamic test cases within SPHinXsys, including fluid dynamics, solid mechanics, fluid-structure interaction, thermal and mass diffusion, reaction-diffusion, and their multi-physics couplings, and demonstrates robust capabilities in testing different problems. It is noted that while the current test environment is built and implemented for a particular scientific computing library, its underlying principles are generic and can be easily adapted for use with other libraries, achieving equal effectiveness.
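To make the curve-similarity idea concrete, here is a minimal sketch of a dynamic-time-warping comparison against a set of reference runs. It is not SPHinXsys's actual reference-database format or tolerance logic; the plain-Python DTW, the synthetic reference curves, and the tolerance factor are assumptions for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain O(n*m) dynamic time warping distance between two result curves."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def regression_check(new_curve, reference_curves, tolerance=1.2):
    """Pass if the new result stays within the variation range of the reference runs."""
    max_ref = max(dtw_distance(r1, r2)
                  for i, r1 in enumerate(reference_curves)
                  for r2 in reference_curves[i + 1:])
    worst_new = max(dtw_distance(new_curve, r) for r in reference_curves)
    return worst_new <= tolerance * max_ref

t = np.linspace(0.0, 6.0, 200)
reference = [np.sin(t + 0.01 * k) for k in range(4)]   # stand-in for multiple executions
print(regression_check(np.sin(t + 0.02), reference))   # True: within the reference variation
```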
This paper presents the mathematical analysis of a dynamical system for avian influenza. The proposed model considers a nonlinear dynamical model of birds and humans. A half-saturated incidence rate is used for the transmission of avian influenza infection. Rigorous mathematical results are presented for the proposed models. The local and global dynamics of each model are presented, and it is proven that when R0 < 1 the disease-free equilibrium of each model is stable both locally and globally, and when R0 > 1 the endemic equilibrium is stable both locally and globally. The numerical results obtained for the proposed model show that influenza could be eliminated from the community if the threshold is not greater than unity.
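The half-saturated incidence idea can be illustrated with a toy compartmental model. The sketch below is not the paper's bird-human model: the compartment structure, parameter values, and the displayed threshold expression are placeholders chosen so that R0 < 1 and the infection dies out, consistent with the stated stability result.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters: death rate mu, transmission rate beta,
# half-saturation constant H, recovery rate gamma (not the paper's values).
mu, beta, H, gamma = 0.02, 0.05, 500.0, 0.1

def model(t, y):
    S, I, R = y
    incidence = beta * S * I / (H + I)        # half-saturated incidence rate
    dS = mu * 1000.0 - incidence - mu * S     # constant recruitment into the susceptible class
    dI = incidence - (gamma + mu) * I
    dR = gamma * I - mu * R
    return [dS, dI, dR]

S0 = 1000.0
R0 = beta * S0 / (H * (gamma + mu))           # threshold for this placeholder model
sol = solve_ivp(model, (0.0, 400.0), [S0 - 10.0, 10.0, 0.0], rtol=1e-8)
print(f"R0 = {R0:.2f}, infected at t = 400: {sol.y[1, -1]:.4f}")  # R0 < 1, infection dies out
```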
The rise of scientific computing was one of the most important advances in S&T progress during the second half of the 20th century. In parallel with theoretical exploration and scientific experiments, scientific computing has become the 'third means' of scientific activity in the world today. The article gives a panoramic review of the subject during the past 50 years in China and lists the contributions made by Chinese scientists in this field. In addition, it reveals some key contents of related projects in the national research plan and looks into the development vista for the subject in China in the dawning years of the new century.
Neural dynamics is a powerful tool for solving online optimization problems and has been used in many applications. However, some problems cannot be modelled as a single-objective optimization, and the neural dynamics method then does not apply. This paper proposes the first neural dynamics model to solve the bi-objective constrained quadratic program, which opens the avenue to extend the power of neural dynamics to multi-objective optimization. We rigorously prove that the designed neural dynamics is globally convergent and that it converges to the optimal solution of the bi-objective optimization in the Pareto sense. Illustrative examples on bi-objective geometric optimization are used to verify the correctness of the proposed method. The developed model is also tested in scientific computing with real industrial data, demonstrating superiority over rival schemes.
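For readers unfamiliar with neural dynamics, the general flavor can be conveyed by a simple projection-type gradient flow on a weighted-sum scalarization of two quadratic objectives. This is only a generic sketch, not the model proposed in the paper: the weighted-sum scalarization, the box constraint, the explicit-Euler integration, and all matrices are assumptions.

```python
import numpy as np

# Two quadratic objectives f_k(x) = 0.5 x^T Q_k x + c_k^T x on the box 0 <= x <= 1.
Q1, c1 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([-1.0, -1.0])
Q2, c2 = np.array([[1.0, 0.2], [0.2, 2.0]]), np.array([0.5, -2.0])
w = 0.5                        # Pareto weight for the scalarized objective
lo, hi = 0.0, 1.0

def grad(x):
    return w * (Q1 @ x + c1) + (1.0 - w) * (Q2 @ x + c2)

# Projection neural dynamics dx/dt = P(x - grad f(x)) - x, integrated with explicit Euler.
x = np.array([0.5, 0.5])
dt = 0.01
for _ in range(5000):
    x = x + dt * (np.clip(x - grad(x), lo, hi) - x)
print(x)   # converges to a weighted-sum Pareto point of this bi-objective QP
```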
Our primary research hypothesis stands on a simple idea: the evolution of top-rated publications on a particular theme depends heavily on the progress and maturity of related topics. This holds even when there are no clear relations, or when some concepts appear to cease to exist and give way to newer ones that started many years ago. We implemented our model based on the Computer Science Ontology (CSO) and analyzed 44 years of publications. Then we derived the most important concepts related to Cloud Computing (CC) from the scientific collection offered by Clarivate Analytics. Our methodology includes data extraction using advanced web crawling techniques, data preparation, statistical data analysis, and graphical representations. We obtained related concepts after aggregating the scores using the Jaccard coefficient and the CSO Ontology. Our article reveals the contribution of Cloud Computing topics in research papers in leading scientific journals and the relationships between the field of Cloud Computing and the interdependent subdivisions identified in the broader framework of Computer Science.
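The Jaccard coefficient used in the aggregation step is simply the ratio of shared to total distinct concepts between two sets. The toy concept sets below are invented for illustration and are not taken from the paper's data.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of ontology concepts."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

cloud = {"cloud computing", "virtualization", "distributed systems"}
paper = {"virtualization", "containers", "distributed systems", "scheduling"}
print(jaccard(cloud, paper))   # 0.4 -> 2 shared concepts out of 5 distinct
```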
The rapid rise of artificial intelligence (AI) has catalyzed advancements across various trades and professions. Developing large-scale AI models is now widely regarded as one of the most viable approaches to achieving general-purpose intelligent agents. This pressing demand has made the development of more advanced computing accelerators an enduring goal for the rapid realization of large-scale AI models. However, as transistor scaling approaches physical limits, traditional digital electronic accelerators based on the von Neumann architecture face significant bottlenecks in energy consumption and latency. Optical computing accelerators, leveraging the high bandwidth, low latency, low heat dissipation, and high parallelism of optical devices and transmission over waveguides or free space, offer promising potential to overcome these challenges. In this paper, inspired by the generic architectures of digital electronic accelerators, we conduct a bottom-up review of the principles and applications of optical computing accelerators based on the basic element of computing accelerators: the multiply-accumulate (MAC) unit. Then, we describe how to solve matrix multiplication by composing calculator arrays from different MAC units in diverse architectures, followed by a discussion of the two main applications where optical computing accelerators are reported to have advantages over electronic computing. Finally, the challenges of optical computing and our perspective on its future development are presented. Moreover, we also survey the current state of optical computing in the industry and provide insights into the future commercialization of optical computing.
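The arithmetic that such MAC arrays implement is easy to write down: a matrix product is nothing more than many scalar multiply-accumulate operations. The sketch below is only the numerical skeleton, not a model of any optical or electronic accelerator architecture.

```python
import numpy as np

def mac(acc, a, b):
    """One multiply-accumulate: acc + a * b, the basic unit of a computing accelerator."""
    return acc + a * b

def matmul_from_macs(A, B):
    """Compose matrix multiplication from scalar MAC operations (row times column)."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] = mac(C[i, j], A[i, p], B[p, j])
    return C

A, B = np.random.rand(3, 4), np.random.rand(4, 2)
print(np.allclose(matmul_from_macs(A, B), A @ B))   # True
```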
A web service wrapping approach for command line programs, which are commonly used in scientific computing, is proposed. First, the software architecture for a basic web service wrapper implementation is given and the functions of the main components are explained. Then, after a comprehensive analysis of data transmission and a job life cycle model, a novel proactive file transmission and job management mechanism is devised to enhance the software architecture, and the command line programs are wrapped into web services in such a way that they can efficiently transmit files, supply instant status feedback, and automatically manage the jobs. Experiments show that the proposed approach achieves higher performance with less memory usage compared to related work, and the usability is also improved. This work has already been put into use in a production system of scientific computing, and the data processing efficiency of the system has been greatly improved.
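The basic wrapping idea, exposing a command-line program behind an HTTP endpoint, can be sketched with the Python standard library alone. The proactive file transmission and job-management mechanisms of the paper are not reproduced here, and the wrapped command (`echo`) and the port are stand-ins.

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class CommandWrapper(BaseHTTPRequestHandler):
    """Expose a command-line program as a tiny web service."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Run the wrapped command-line program with the supplied arguments.
        result = subprocess.run(["echo"] + payload.get("args", []),
                                capture_output=True, text=True)
        body = json.dumps({"returncode": result.returncode,
                           "stdout": result.stdout}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CommandWrapper).serve_forever()
```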
In this paper, we adopt cloud computing in a specific scientific computing field for its virtualization, distribution, and dynamic extendibility, as follows: we obtain high-energy parabolic self-similar pulses by numerical simulation using our non-distributed passively mode-locked Er-doped fiber laser model. To research the characteristics of these wave-breaking-free self-similar pulses, their chirp must be extracted. We propose, for the first time, several time-frequency analysis methods for chirp extraction of ultra-short optical pulses and discuss their advantages and disadvantages in this particular application.
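As one illustrative time-frequency approach (not necessarily among those compared in the paper), the spectrogram ridge of a pulse tracks its instantaneous frequency, and the ridge slope gives the chirp rate. The synthetic pulse below lives in normalized units and is only a stand-in for the simulated self-similar pulses.

```python
import numpy as np
from scipy.signal import chirp, stft

# Synthetic linearly chirped pulse in normalized units; real self-similar
# pulses live on femto/picosecond scales.
fs = 1000.0
t = np.linspace(0.0, 2.0, int(2 * fs), endpoint=False)
envelope = np.exp(-((t - 1.0) / 0.4) ** 2)
signal = envelope * chirp(t, f0=50.0, t1=2.0, f1=200.0)   # 75 Hz/s linear chirp

# The spectrogram ridge approximates the instantaneous frequency of the pulse.
f, frames, Z = stft(signal, fs=fs, nperseg=256)
ridge = f[np.argmax(np.abs(Z), axis=0)]
power = np.abs(Z).max(axis=0)
keep = power > 0.1 * power.max()                          # drop near-empty edge frames
slope = np.polyfit(frames[keep], ridge[keep], 1)[0]
print(f"estimated chirp rate: {slope:.1f} Hz/s")
```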
We present an efficient deep learning method called coupled deep neural networks (CDNNs) for the coupling of the Stokes and Darcy–Forchheimer problems. Our method properly compiles the interface conditions of the coupled problems into the networks and can serve as an efficient alternative for the complex coupled problems. To impose energy conservation constraints, the CDNNs utilize simple fully connected layers and a custom loss function that carries out the model training process while respecting the physical properties of the exact solution. The approach can be beneficial for the following reasons: firstly, we sample randomly and only input spatial coordinates, without being restricted by the nature of the samples; secondly, our method is meshfree, which makes it more efficient than traditional methods; finally, the method is parallel and can solve multiple variables independently at the same time. We present theoretical results to guarantee the convergence of the loss function and the convergence of the neural networks to the exact solution. Some numerical experiments are performed and discussed to demonstrate the performance of the proposed method.
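A schematic of the composite-loss idea only: two small networks for the two subdomains and a loss that sums subdomain residual terms and an interface-matching term. The actual CDNN interface conditions for Stokes and Darcy–Forchheimer are not reproduced; the residual functions, network sizes, and collocation sampling below are placeholders.

```python
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

net_stokes, net_darcy = mlp(), mlp()   # one network per subdomain

def loss(x_s, x_d, x_if):
    """Composite loss: subdomain residuals plus an interface-matching term.
    The residuals here are placeholders, not the true Stokes / Darcy-Forchheimer ones."""
    r_s = net_stokes(x_s).pow(2).mean()                        # placeholder residual, domain 1
    r_d = net_darcy(x_d).pow(2).mean()                         # placeholder residual, domain 2
    r_if = (net_stokes(x_if) - net_darcy(x_if)).pow(2).mean()  # continuity across the interface
    return r_s + r_d + r_if

opt = torch.optim.Adam(list(net_stokes.parameters()) + list(net_darcy.parameters()), lr=1e-3)
for _ in range(100):
    x_s, x_d, x_if = (torch.rand(256, 2) for _ in range(3))    # random collocation points (meshfree)
    opt.zero_grad()
    loss(x_s, x_d, x_if).backward()
    opt.step()
```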
In order to realize visualization of a three-dimensional data field (TDDF) in an instrument, two methods of TDDF visualization and the usual manner of quick graphic and image processing are analyzed, along with how to use the OpenGL technique and the characteristics of the analyzed data to construct a TDDF; the approaches to realistic rendering and interactive processing are described. Then the medium geometric element and a related realistic model are constructed by means of the first algorithm. Models obtained for attaching the third dimension in a three-dimensional data field are presented. An example of TDDF realization for machine measurement is provided. Analysis of the resulting graphics indicates that the three-dimensional graphics built by the developed method feature good realism, fast processing, and strong interactivity.
Radial Basis Function methods for scattered data interpolation and for the numerical solution of PDEs were originally implemented in a global manner. Subsequently, it was realized that the methods could be implemented more efficiently in a local manner and that the local approaches could match or even surpass the accuracy of the global implementations. In this work, three localization approaches are compared: a local RBF method, a partition of unity method, and a recently introduced modified partition of unity method. A simple shape parameter selection method is introduced, and the application of artificial viscosity to stabilize each of the local methods when approximating time-dependent PDEs is reviewed. Additionally, a new type of quasi-random center is introduced, which may be a better choice than other quasi-random points that are commonly used with RBF methods. All the results within the manuscript are reproducible, as they are included as examples in the freely available Python Radial Basis Function Toolbox.
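For context, here is a bare-bones global Gaussian RBF interpolant on scattered 1D data with an explicit shape parameter. It does not use the Toolbox's own routines or the localization strategies compared in the paper; the test function, center count, and shape parameter value are assumptions.

```python
import numpy as np

def rbf_interpolate(centers, values, eval_points, eps=5.0):
    """Global Gaussian RBF interpolant; eps is the shape parameter, which trades
    accuracy against the conditioning of the interpolation matrix."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(np.abs(centers[:, None] - centers[None, :]))       # interpolation matrix
    weights = np.linalg.solve(A, values)
    return phi(np.abs(eval_points[:, None] - centers[None, :])) @ weights

rng = np.random.default_rng(1)
xc = np.sort(rng.uniform(0.0, 1.0, 30))                        # scattered centers
xe = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(rbf_interpolate(xc, np.sin(2 * np.pi * xc), xe) - np.sin(2 * np.pi * xe)))
print(f"max interpolation error: {err:.1e}")
```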
In this paper, a parallel library, the Portable, Extensible Toolkit for Scientific Computation (PETSc), is used to solve linear systems in the soil-water coupled finite element method (FEM) for geotechnical problems. The parallel environment is integrated into GLEAVES, which is a geotechnical software package used for finite element simulation. The linear system Ax = b, which is a fundamental and the most time-consuming part of the FEM, is solved with iterative solvers in PETSc. In order to find a robust and effective combination of iterative solvers and corresponding preconditioners for soil-water coupled problems, performance evaluations of Krylov subspace methods and four preconditioners are carried out. The results indicate that the generalized minimal residual (GMRES) method coupled with preconditioners can provide an effective solution. The application to a construction project is presented to illustrate the potential of the proposed solution.
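The solver-plus-preconditioner combination can be mimicked on a small scale with SciPy in place of PETSc. The incomplete-LU preconditioner and tridiagonal test matrix below are illustrative choices, not the preconditioners or FEM matrices benchmarked in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse system standing in for the soil-water coupled FEM matrix.
n = 500
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization wrapped as a preconditioner for GMRES.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```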
This paper briefly describes the techniques of Visualization in Scientific Computation (ViSC). Combining OpenGL, a 3D graphics library, we discuss and analyze some visualization techniques in electromagnetic engineering.
1 Introduction The perpetual demand for computational power in scientific computing incessantly propels high-performance computing (HPC) systems toward Zettascale computation and even beyond [1,2]. Concurrently, the ascension of artificial intelligence (AI) has engendered a marked surge in computational requisites, doubling the required computation performance every 3.4 months [3]. The collective pursuit of ultra-high computational capability has positioned AI and scientific computing as the preeminent twin drivers of HPC.
A world-renowned expert on non-conforming finite element methods, Prof. Zhong-Ci Shi left a lasting impact on the overarching development of Chinese computational mathematics and scientific computing, as well as computational mathematics worldwide, through his tireless and visionary leadership. His dedication to developing academic programs in many leading Chinese universities and institutions, along with his stewardship in establishing national computational research programs, played a fundamental role in elevating Chinese computational research to global recognition. Several generations of Chinese scholars benefited from his guidance and teaching, and this focused issue of original research contributions from 33 Chinese and international scientist teams bears witness to the profound influence of Prof. Shi on their research careers through his lifetime of work spanning over six decades. Beyond being an accomplished mathematician, Prof. Shi lived the life of a cultured man who radiated vigor and warmth, imbuing his life with heartiness, openness, and honesty.
With the rapid growth of computer science and big data, the traditional von Neumann architecture suffers from aggravating data communication costs due to the separated structure of the processing units and memories. The memristive in-memory computing paradigm is considered a prominent candidate to address these issues, and plentiful applications have been demonstrated and verified. These applications can be broadly categorized into two major types: soft computing, which can tolerate uncertain and imprecise results, and hard computing, which emphasizes explicit and precise numerical results for each task, leading to different requirements on computational accuracy and the corresponding hardware solutions. In this review, we conduct a thorough survey of the recent advances in memristive in-memory computing applications, both of the soft computing type, which focuses on artificial neural networks and other machine learning algorithms, and of the hard computing type, which includes scientific computing and digital image processing. At the end of the review, we discuss the remaining challenges and future opportunities of memristive in-memory computing in the incoming Artificial Intelligence of Things era.
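The analog multiply-accumulate performed by a memristive crossbar follows Ohm's and Kirchhoff's laws: column currents are the conductance-weighted sums of the row voltages, so a vector-matrix product happens in a single analog step. The sketch below is only the idealized arithmetic, ignoring device non-idealities; the conductance and voltage values are invented for illustration.

```python
import numpy as np

# Idealized memristive crossbar: G[i, j] is the conductance of the device at
# row i, column j; applying voltages V to the rows yields column currents
# I = G^T V, i.e. a vector-matrix multiplication in one analog step.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances in siemens
V = np.array([0.2, 0.1, 0.3, 0.05])        # row voltages in volts

I = G.T @ V                                # Kirchhoff current summation per column
print(I)                                   # column currents in amperes
```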
The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation scien... The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.……展开更多