The work presents electronic structure computations and optical spectroscopy studies of the half-Heusler compounds ScNiBi and YNiBi. Our first-principles computations of the electronic structures were based on density functional theory, accounting for spin-orbit coupling. Both compounds are computed to be semiconductors. The calculated gap values make ScNiBi and YNiBi suitable for thermoelectric and optoelectronic applications and as selective filters. In ScNiBi and YNiBi, an intense peak at the energy of −2 eV is composed of the Ni 3d states in the conduction band, and the valence band mostly contains these states with some contributions from the Bi 6p and Sc 3d or Y 4d electronic states. These states participate in the formation of the indirect gap of 0.16 eV (ScNiBi) and 0.18 eV (YNiBi). Using spectral ellipsometry in the wavelength interval 0.22–15 μm, the optical functions of the materials are studied and their dispersion features revealed. Good agreement between the experimental and modeled optical conductivity spectra allowed us to analyze orbital contributions. The anomalously low optical absorption observed in the low-energy region of the spectrum is attributed to the band-calculation results, which indicate a small density of electronic states near the Fermi energy of these complex materials.
Complicated changes occur inside steel parts during the quenching process. A three-dimensional nonlinear mathematical model of the quenching process has been established, and numerical simulation of the temperature field, microstructure, and stress field has been realized. An alternative technique for the formation of high-strength materials has been developed on the basis of intensified heat transfer at phase transformations. A technology for achieving maximum compressive residual stresses in the hardened surface is introduced. It has been shown that there is an optimal depth of the hardened layer that provides maximum compressive stresses on the surface. It has also been established that additional strengthening (superstrengthening) of the material is observed in the surface hardened layer. A generalized formula for determining the time at which maximum compressive stresses are reached on the surface has been proposed.
In this paper, we provide a new approach to data encryption using generalized inverses. Encryption is based on computing the weighted Moore–Penrose inverse A†_{M,N} of an n×8 constant matrix. The square Hermitian positive definite matrix N of order 8×8 is the key. The proposed solution represents a very strong key, since the number of different variants of positive definite matrices of order 8 is huge. We have performed NIST (National Institute of Standards and Technology) quality assurance tests on a randomly generated Hermitian matrix (a total of 10 different tests, plus additional analysis with approximate entropy and random digression). From this additional testing of the quality of the generated random matrix, we conclude that the results of our analysis satisfy the defined strict requirements. The proposed MP encryption method can be applied effectively to the encryption and decryption of images in multi-party communications. In the experimental part of this paper, we compare encryption methods by means of machine-learning methods: the algorithms are compared through the classification results they achieve on the ciphertext classes. In a comparative analysis, we give classification results for the advanced encryption standard (AES) algorithm and for the proposed encryption method based on the Moore–Penrose inverse.
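The weighted Moore–Penrose inverse at the heart of the scheme can be computed from the ordinary pseudoinverse of a weighted matrix via the standard identity A†_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})† M^{1/2}. A minimal sketch (the function names and the choice M = I in the usage below are ours, not from the paper; numpy assumed):

```python
import numpy as np

def weighted_mp_inverse(A, M, N):
    """Weighted Moore-Penrose inverse A+_{M,N} for Hermitian positive
    definite weights M (row space) and N (column space)."""
    def sqrt_and_invsqrt(X):
        # Square root and inverse square root of an HPD matrix via eigh.
        w, V = np.linalg.eigh(X)
        return (V * np.sqrt(w)) @ V.conj().T, (V / np.sqrt(w)) @ V.conj().T

    M_sq, _ = sqrt_and_invsqrt(M)
    _, N_invsq = sqrt_and_invsqrt(N)
    # A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})+ M^{1/2}
    return N_invsq @ np.linalg.pinv(M_sq @ A @ N_invsq) @ M_sq
```

The result satisfies the four weighted Penrose conditions: AXA = A, XAX = X, and both MAX and NXA Hermitian.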
The root multiple signal classification (root-MUSIC) algorithm is one of the most important techniques for direction of arrival (DOA) estimation. Using a uniform linear array (ULA) composed of M sensors, this method usually estimates L signal DOAs, where L < M, by finding the roots of a (2M−1)-order polynomial that lie closest to the unit circle. A novel efficient root-MUSIC-based method for direction estimation is presented, in which the order of the polynomial is reduced to 2L. Compared with the unitary root-MUSIC (U-root-MUSIC) approach, which involves real-valued computations only in the subspace decomposition stage, the new technique implements both subspace decomposition and polynomial rooting with real-valued computations, and hence shows a significant efficiency advantage over most state-of-the-art techniques. Numerical simulations are conducted to verify the correctness and efficiency of the new estimator.
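For reference, the baseline complex-valued root-MUSIC that the paper improves upon can be sketched as follows (a textbook version, not the authors' reduced-order or real-valued variant; function names and the half-wavelength spacing default are our choices; numpy assumed):

```python
import numpy as np

def root_music_doas(X, L, d_over_lambda=0.5):
    """Textbook root-MUSIC for a ULA. X is the M x T snapshot matrix,
    L the number of sources; returns DOA estimates in degrees."""
    M, T = X.shape
    R = X @ X.conj().T / T                     # sample covariance
    _, V = np.linalg.eigh(R)                   # eigenvalues ascending
    En = V[:, :M - L]                          # noise subspace
    C = En @ En.conj().T
    # Coefficients of the degree-(2M-2) polynomial: diagonal sums of C,
    # highest power first (as np.roots expects).
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]         # one root of each mirrored pair
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:L]  # closest to circle
    return np.degrees(np.arcsin(np.angle(roots) / (2 * np.pi * d_over_lambda)))
```

The 2M−2 roots occur in conjugate-reciprocal pairs; keeping the L inside-roots nearest the unit circle recovers the signal phases.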
In this paper, a kind of second-order two-scale (SOTS) computation is developed for the conductive-radiative heat transfer problem in periodic porous materials. First, by the asymptotic expansion of the temperature field, the cell problem, homogenization problem, and second-order correctors are obtained successively. Then, the corresponding finite element algorithms are proposed. Finally, some numerical results are presented and compared with theoretical results. The numerical results of the proposed algorithm agree well with those of the FE algorithm, demonstrating the accuracy of the present method and its potential applications in the thermal engineering of porous materials.
The quantitative rules of the transfer and variation of errors, when the Gaussian integral functions F_n(z) are evaluated sequentially by recursion, are expounded. The traditional viewpoint that negates, in principle, the applicability and reliability of the upward recursive formula is amended. An optimal scheme of joint upward and downward recursion has been developed for sequential F_n(z) computations. No additional accuracy is needed in the fundamental term of the recursion, because the absolute error of F_n(z) always decreases along the recursive approach. The scheme can be employed in modifying any existing subprogram for F_n(z) computations. In the case of p-, d-, f-, and g-type Gaussians, combining this method with Schaad's formulas reduces the additive operations by at least 40%, and the multiplicative and exponential operations by 60%.
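The downward branch of such a joint scheme is easy to illustrate: seed the recursion a few orders above the highest one needed and recur downward, so the seed error is damped at every step. A sketch under the standard definition F_n(z) = ∫₀¹ t^(2n) e^(−z t²) dt (function name and the seeding margin are our choices, not the paper's scheme):

```python
import math

def boys_downward(nmax, z, extra=15):
    """F_0(z)..F_nmax(z) by downward recursion
    F_n = (2 z F_{n+1} + exp(-z)) / (2 n + 1),
    seeded at n = nmax + extra with the rapidly convergent series
    F_n(z) = exp(-z) * sum_k (2z)^k / ((2n+1)(2n+3)...(2n+2k+1))."""
    ntop = nmax + extra
    term = 1.0 / (2 * ntop + 1)
    total = term
    k = 1
    while term > 1e-17 * total:
        term *= 2 * z / (2 * ntop + 2 * k + 1)
        total += term
        k += 1
    ez = math.exp(-z)
    F = [0.0] * (ntop + 1)
    F[ntop] = total * ez
    for n in range(ntop - 1, -1, -1):       # error shrinks at each step
        F[n] = (2 * z * F[n + 1] + ez) / (2 * n + 1)
    return F[:nmax + 1]
```

Downward, the factor 2z/(2n+1) multiplying the carried error is small for n > z, which is why no extra accuracy is needed in the seed.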
In recent years, high-performance scientific computing on workstation clusters connected by a local area network has become a hot topic. Owing to the longer latency and the higher protocol-processing overhead compared with the capacity of a powerful single workstation, it is becoming very important to balance not only the numerical load but also the communication load, and to overlap communication with computation during parallel computing. Hence, efficiency evaluation rules must expose these properties of a given parallel algorithm, so that an existing algorithm can be optimized to attain its highest parallel efficiency. The traditional efficiency evaluation rules can no longer accomplish this. Fortunately, building on Culler's detailed discussion of interconnection networks for MPP systems in the LogP model, we present in this paper a system of efficiency evaluation rules for parallel computations on a workstation cluster with the PVM 3.0 parallel software framework. These rules satisfy the above requirements. Finally, two typical applications, one synchronous and one asynchronous, are designed to verify the validity of these rules on a cluster of 4 SGI workstations connected by Ethernet.
Noise generated by civil transport aircraft during the take-off and approach-to-land phases of operation is an environmental problem. The aircraft noise problem is first reviewed in this article. The review is followed by a description and assessment of a number of sound propagation methods suitable for applications with a background mean flow field pertinent to aircraft noise. Of the three main areas of the noise problem, i.e. generation, propagation, and radiation, propagation provides a vital link between near-field noise generation and far-field radiation. Its accurate assessment ensures the overall validity of a prediction model. Of the various classes of propagation equations, linearised Euler equations are often cast in either the time domain or the frequency domain. The equations are often solved numerically by computational aeroacoustics techniques, but are subject to the onset of Kelvin-Helmholtz (K-H) instability modes, which may ruin the solutions. Other forms of linearised equations, e.g. acoustic perturbation equations, have been proposed, with differing degrees of success.
The mixing and merging characteristics of multiple tandem jets in a crossflow are investigated using the Computational Fluid Dynamics (CFD) code FLUENT. The realizable k-ε model is employed for turbulent closure of the Reynolds-averaged Navier-Stokes equations. Numerical experiments are performed for 1-, 2- and 4-jet groups, for jet-to-crossflow velocity ratios of R = 4.2 ~ 16.3. The computed velocity and scalar concentration fields are in good agreement with experiments using Particle Image Velocimetry (PIV) and Laser Induced Fluorescence (LIF), as well as with previous work. The results show that the leading jet behaves like a single free jet in crossflow, while all the downstream rear jets have less bent-over trajectories, suggesting a reduced ambient velocity for the rear jets. The concentration decay of the leading jet is greater than that of the rear jets. When normalized by appropriate crossflow momentum length scales, all jet trajectories follow a universal relation regardless of the sequential order of jet position and the number of jets. Supported by the velocity and trajectory measurements, the averaged maximum effective crossflow velocity ratio is computed to be in the range of 0.39 to 0.47.
We propose an efficient and robust way to design absorbing boundary conditions in atomistic computations. An optimal discrete boundary condition is obtained by minimizing a functional of a reflection coefficient integral over a range of wave numbers. The minimization is performed with respect to a set of wave numbers, at which transparent absorption is reached. Compared with the optimization with respect to the boundary condition coefficients suggested by E and Huang [Phys. Rev. Lett. 87 (2001) 133501], we reduce considerably the number of independent variables and the computing cost. We further demonstrate with numerical examples that both the optimization and the wave absorption are more robust in the proposed design.
Many papers have been published on VLF (very low frequency) characteristics in studies of seismo-ionospheric perturbations. Usually VLF records (amplitude and/or phase) are used to investigate mainly the temporal evolution of VLF propagation anomalies, with special attention to one particular propagation path. The most important advantage of this paper is the simultaneous use of several propagation paths. A succession of earthquakes (EQs) occurred in the Kumamoto area of Kyushu Island: two strong foreshocks with magnitudes of 6.5 and 6.4 on 14 April (UT) and the main shock with magnitude 7.3 on 15 April (UT). Because the EQ epicenters are not far from the VLF transmitter (call sign JJI, in Miyazaki prefecture), we can simultaneously utilize 8 observing stations of our network all over Japan. Together with theoretical computations based on wave-hop theory, we trace both the temporal and spatial evolution of the ionospheric perturbation associated with this succession of EQs. It is found that the ionospheric perturbation begins to appear about two weeks before the EQs, and becomes most developed 5-3 days before the main shock. When the perturbation is most disturbed, the maximum change in the vertical direction is a depletion of the VLF effective ionospheric height of the order of 10 km, and its horizontal scale (radius) is about 1000 km. These spatio-temporal changes of the seismo-ionospheric perturbation are investigated in detail in the discussion, a comparison is made with the VLF characteristics of the 1995 Kobe EQ of the same magnitude and fault type, and a brief discussion of the generation mechanism of seismo-ionospheric perturbations is finally given.
This paper proposes an algorithm for increasing virtual machine security in cloud computing. Imbalance between load and energy has been one of the disadvantages of older methods of server provisioning and hosting: if two virtual servers are active on a host and the energy load on that host grows, it allocates the energy of other (virtual) hosts to itself to stay steady, which usually leads to hardware overflow errors and user dissatisfaction. Cloud-processing methods have mitigated this problem, but not completely; therefore, the proposed algorithm not only implements a suitable security background but also divides energy consumption and load balancing appropriately among virtual servers. The proposed algorithm is compared with several previously proposed security strategies, including SC-PSSF, PSSF and DEEAC. Comparisons show that the proposed method offers high-performance computing, efficiency, and lower energy consumption in the network.
Verification of quantum computations is crucial, since quantum systems are extremely vulnerable to the environment. However, directly verifying the output of a quantum computation is difficult, since efficiently simulating a large-scale quantum computation on a classical computer is generally thought to be impossible. To overcome this difficulty, we propose a self-testing system for quantum computations, which can be used to verify whether a quantum computation is performed correctly by itself. Our basic idea is to use some extra ancilla qubits to test the output of the computation. We design two kinds of permutation circuits into the original quantum circuit: one is applied to the ancilla qubits, whose output yields the testing information; the other is applied to all qubits (including the ancilla qubits) and aims to uniformly permute the positions of all qubits. We show that both permutation circuits are easy to implement. In this way, we prove that any quantum computation has an efficient self-testing system. Finally, we discuss the relation between our self-testing system and interactive proof systems, and show that the two systems are equivalent if the verifier is allowed some quantum capacity.
In this paper, a class of slightly perturbed equations of the form F(x) = ξ − x + αΦ(x) is treated graphically and symbolically, where Φ(x) is an analytic function of x. For the graphical developments, we set up a simple graphical method for the real roots of the equation F(x) = 0, illustrated by four transcendental equations. In fact, the graphical solution usually provides excellent initial conditions for the iterative solution of the equation, helping to avoid the critical situations, ranging from divergence to very slow convergence, that arise in iterative methods when no good initial condition close to the root is available. For the analytical developments, literal analytical solutions are obtained for the most celebrated slightly perturbed equation, Kepler's equation for the elliptic orbit. Moreover, the effect of the orbital eccentricity on the rate of convergence of the series is illustrated graphically.
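Kepler's equation M = E − e sin E is exactly of the stated form with ξ = M, α = e and Φ = sin, and the role of a good starting value is easy to see in code. A minimal sketch (a generic Newton iteration with the cheap first guess E₀ = M + e sin M, not the paper's literal series solution; names are ours):

```python
import math

def solve_kepler(M, e, tol=1e-14, itmax=50):
    """Solve M = E - e*sin(E) for the eccentric anomaly E (radians)
    by Newton's method, started from E0 = M + e*sin(M)."""
    E = M + e * math.sin(M)                  # cheap "graphical" first guess
    for _ in range(itmax):
        # Newton step for f(E) = M - E + e*sin(E); f'(E) = -(1 - e*cos(E))
        dE = (M - E + e * math.sin(E)) / (1.0 - e * math.cos(E))
        E += dE
        if abs(dE) < tol:
            break
    return E
```

With a poor starting value and high eccentricity the same iteration can converge very slowly or diverge, which is the situation the graphical initial conditions are meant to avoid.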
Task scheduling determines the performance of NOW (network of workstations) computing to a large extent. However, the computer system architecture, computing capability, and system load are rarely considered together. In this paper, a biggest-heterogeneous scheduling algorithm is presented. It fully considers the system characteristics (from the application view), structure, and state, so it can always utilize all processing resources under a reasonable premise. Experimental results show that the algorithm can significantly shorten the response time of jobs.
This paper discusses the validity of (adaptive) Lagrange generalized plane finite element methods (FEM) and plate element methods for the accurate analysis of acoustic waves in multi-layered piezoelectric structures with tiny interfaces between metal electrodes and surface-mounted piezoelectric substrates. We conclude that the quantitative relationships between the acoustic and electric fields in a piezoelectric structure can be accurately determined through the proposed finite element methods. The higher-order Lagrange FEM proposed for dynamic piezoelectric computation proves very accurate (prescribed relative error 0.02%-0.04%) and a great improvement in convergence accuracy over the higher-order Mindlin plate element method for piezoelectric structural analysis, owing to the assumptions and corrections in the plate theories. The converged Lagrange finite element methods are compared with the plate element methods, and the computed results are in good agreement with available exact and experimental data. The adaptive Lagrange finite element methods and a new FEA computer program developed for macro- and micro-scale analyses are reviewed in this paper, and have recently been extended, with great potential, to high-precision nano-scale analysis; the similarities between piezoelectric and seismic wave propagation in layered structures and plates are stressed.
We present multi-threading and SIMD optimizations of the short-range potential calculation kernel in molecular dynamics. For the multi-threading optimization, we design a partition-and-two-steps (PTS) method to avoid the write conflicts caused by using Newton's third law. Our method eliminates the serialization bottleneck without extra memory, and we implement it using OpenMP. We then discuss the influence of the cutoff if-statement on the performance of vectorization in MD simulations, and propose a pre-searching-neighbors method that makes about 70% of atoms pass the cutoff check, eliminating a large amount of redundant calculation. The experimental results prove that our PTS method is scalable and efficient. In double precision, our 256-bit SIMD implementation is about 3× faster than the scalar version.
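The cutoff check that hampers vectorization can be written branch-free as a mask, the array-language analogue of SIMD predication. A toy Lennard-Jones energy sketch (all names are ours; numpy assumed; an O(N²) all-pairs loop rather than a real neighbor list, purely to show the masked cutoff):

```python
import numpy as np

def lj_energy_masked(pos, rc, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy with the cutoff applied as a boolean
    mask instead of a per-pair if-statement. pos is an (N, 3) array."""
    d = pos[:, None, :] - pos[None, :, :]      # all pairwise displacements
    r2 = np.sum(d * d, axis=-1)
    iu = np.triu_indices(len(pos), k=1)        # each pair once (Newton's 3rd law)
    r2 = r2[iu]
    mask = r2 < rc * rc                        # cutoff check, branch-free
    s6 = (sigma * sigma / r2[mask]) ** 3
    return np.sum(4.0 * eps * (s6 * s6 - s6))
```

Pre-filtering pairs so that most entries survive the mask is the same idea as the paper's pre-searching of neighbors: less work is wasted on pairs that fail the cutoff.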
A posteriori error computations in the space-time coupled and space-time decoupled finite element methods for initial value problems are essential: 1) to determine the accuracy of the computed evolution; 2) if the errors in the computed solutions are higher than an acceptable threshold, a posteriori error computations provide measures for designing adaptive processes to improve the accuracy of the solution. How well the space-time approximation in each of the two methods satisfies the equations of the mathematical model over the space-time domain in the pointwise sense is the absolute measure of the accuracy of the computed solution. When the L2-norm of the space-time residual over the space-time domain of the computations approaches zero, the approximation φh(x,t) → φ(x,t), the theoretical solution. Thus, the proximity of ||E||_L2, the L2-norm of the space-time residual function, to zero is a measure of the accuracy, or the error, of the computed solution. In this paper, we present a methodology and a computational framework for computing ||E||_L2 in a posteriori error computations for both space-time coupled and space-time decoupled finite element methods. It is shown that the proposed a posteriori computations require the h,p,k framework in both methods to ensure that the space-time integrals over the space-time discretization are Riemann; hence, the proposed a posteriori computations cannot be performed in finite difference and finite volume methods for solving initial value problems. High-order global differentiability in time of the integration methods is essential in the space-time decoupled method for the a posteriori computations; this restricts the use of methods such as Euler's method, Runge-Kutta methods, etc., in the time integration of the ODEs in time. Mathematical and computational details, including model problem studies, are presented in the paper. To the authors' knowledge, this is the first presentation of the proposed a posteriori error computation methodology and computational infrastructure for initial value problems.
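Computing ||E||_L2 reduces to element-wise quadrature of the squared residual over the discretization, followed by a square root. A one-dimensional (time-domain) sketch, not the paper's space-time implementation (function name, element count, and quadrature order are our choices; numpy assumed):

```python
import numpy as np

def residual_l2_norm(residual, t0, t1, nel=100, ngauss=5):
    """||E||_L2 of a residual function over [t0, t1]: integrate residual^2
    element by element with Gauss-Legendre quadrature, then take sqrt."""
    xg, wg = np.polynomial.legendre.leggauss(ngauss)  # nodes/weights on [-1, 1]
    edges = np.linspace(t0, t1, nel + 1)
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        t = 0.5 * (b - a) * xg + 0.5 * (a + b)        # map [-1, 1] -> [a, b]
        total += 0.5 * (b - a) * np.sum(wg * residual(t) ** 2)
    return np.sqrt(total)
```

In the actual method the residual comes from substituting the finite element approximation into the governing equations, so a vanishing norm certifies the computed evolution.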
The emulation of human multisensory functions to construct artificial perception systems is an intriguing challenge for developing humanoid robotics and cross-modal human-machine interfaces. Inspired by human multisensory signal generation and neuroplasticity-based signal processing, an artificial perceptual neuro array with visual-tactile sensing, processing, learning, and memory is demonstrated here. The neuromorphic bimodal perception array compactly combines an artificial photoelectric synapse network with an integrated mechanoluminescent layer, endowing individual and synergistic plastic modulation of optical and mechanical information, including short-term memory, long-term memory, paired-pulse facilitation, and "learning-experience" behavior. Sequential or superimposed visual and tactile stimulus inputs can efficiently simulate the associative learning process of "Pavlov's dog". The fusion of visual and tactile modulation enables enhanced memory of the stimulation image during the learning process. A machine-learning algorithm coupled with an artificial neural network achieves a pattern-recognition accuracy of 70% for bimodal training, higher than that obtained by unimodal training. In addition, the artificial perceptual neuron has a low energy consumption of ~20 pJ. With its mechanical compliance and simple architecture, the neuromorphic bimodal perception array has promising applications in large-scale cross-modal interactions and high-throughput intelligent perception.
As transistor sizes shrink and power density increases, thermal simulation has become an indispensable part of the device design procedure. However, existing works on advanced-technology transistors use simplified empirical models to calculate the effective thermal conductivity in simulations. In this work, we present a dataset of size-dependent effective thermal conductivity with electron and phonon properties extracted from ab initio computations. Absolute in-plane and cross-plane thermal conductivity data for eight semiconducting materials (Si, Ge, GaN, AlN, 4H-SiC, GaAs, InAs, BAs) and four metallic materials (Al, W, TiN, Ti), with characteristic lengths ranging from 5 nm to 50 nm, are provided. Besides the absolute values, normalized effective thermal conductivity is also given, in case it needs to be combined with updated bulk thermal conductivity values in the future.
文摘Thework presents the electronic structure computations and optical spectroscopy studies of half-Heusler ScNiBi and YNiBi compounds.Our first-principles computations of the electronic structures were based on density functional theory accounting for spin-orbit coupling.These compounds are computed to be semiconductors.The calculated gap values make ScNiBi and YNiBi valid for thermoelectric and optoelectronic applications and as selective filters.In ScNiBi and YNiBi,an intense peak at the energy of−2 eV is composed of theNi 3d states in the conduction band,and the valence band mostly contains these states with some contributions from the Bi 6p and Sc 3d or Y 4d electronic states.These states participate in the formation of the indirect gap of 0.16 eV(ScNiBi)and 0.18 eV(YNiBi).Within the spectral ellipsometry technique in the interval 0.22–15μm of wavelength,the optical functions of materials are studied,and their dispersion features are revealed.A good matching of the experimental and modeled optical conductivity spectra allowed us to analyze orbital contributions.The abnormally low optical absorption observed in the low-energy region of the spectrum is referred to as the results of band calculations indicating a small density of electronic states near the Fermi energy of these complex materials.
文摘Complicated changes occur inside the steel parts during quenching process. A three dimensional nonlinear mathematical model for quenching process has been established and the numerical simulation on temperature field, microstructure and stress field has been realized. The alternative technique for the formation of high-strength materials has been developed on the basis of intensification of heat transfer at phase transformations. The technology for the achievement of maximum compressive residual stresses on the hard surface is introduced. It has been shown that there is an optimal depth of hard layer providing the maximum compression stresses on the surface. It has also been established that in the surface hard layer additional strengthening (superstrengthening) of the material is observed. The generalized formula for the determination of the time of reaching maximum compressive stresses on the surface has been proposed.
基金the support of Network Communication Technology(NCT)Research Groups,FTSM,UKM in providing facilities for this research.This paper is supported under the Dana Impak Perdana UKM DIP-2018-040 and Fundamental Research Grant Scheme FRGS/1/2018/TK04/UKM/02/7.
文摘In this paper,we provide a new approach to data encryption using generalized inverses.Encryption is based on the implementation of weighted Moore–Penrose inverse A y MNenxmT over the nx8 constant matrix.The square Hermitian positive definite matrix N8x8 p is the key.The proposed solution represents a very strong key since the number of different variants of positive definite matrices of order 8 is huge.We have provided NIST(National Institute of Standards and Technology)quality assurance tests for a random generated Hermitian matrix(a total of 10 different tests and additional analysis with approximate entropy and random digression).In the additional testing of the quality of the random matrix generated,we can conclude that the results of our analysis satisfy the defined strict requirements.This proposed MP encryption method can be applied effectively in the encryption and decryption of images in multi-party communications.In the experimental part of this paper,we give a comparison of encryption methods between machine learning methods.Machine learning algorithms could be compared by achieved results of classification concentrating on classes.In a comparative analysis,we give results of classifying of advanced encryption standard(AES)algorithm and proposed encryption method based on Moore–Penrose inverse.
基金supported by the National Natural Science Foundation of China(61501142)the Shandong Provincial Natural Science Foundation(ZR2014FQ003)+1 种基金the Special Foundation of China Postdoctoral Science(2016T90289)the China Postdoctoral Science Foundation(2015M571414)
文摘The root multiple signal classification(root-MUSIC) algorithm is one of the most important techniques for direction of arrival(DOA) estimation. Using a uniform linear array(ULA) composed of M sensors, this method usually estimates L signal DOAs by finding roots that lie closest to the unit circle of a(2M-1)-order polynomial, where L 〈 M. A novel efficient root-MUSIC-based method for direction estimation is presented, in which the order of polynomial is efficiently reduced to 2L. Compared with the unitary root-MUSIC(U-root-MUSIC) approach which involves real-valued computations only in the subspace decomposition stage, both tasks of subspace decomposition and polynomial rooting are implemented with real-valued computations in the new technique,which hence shows a significant efficiency advantage over most state-of-the-art techniques. Numerical simulations are conducted to verify the correctness and efficiency of the new estimator.
基金Project supported by the National Basic Research Program of China(Grant No.2010CB832702)the National Natural Science Foundation of China(Grant No.90916027)
Abstract: In this paper, a kind of second-order two-scale (SOTS) computation is developed for the conductive-radiative heat transfer problem in periodic porous materials. First, by the asymptotic expansion of the temperature field, the cell problem, homogenization problem, and second-order correctors are obtained successively. Then, the corresponding finite element algorithms are proposed. Finally, some numerical results are presented and compared with theoretical results. The numerical results of the proposed algorithm agree well with those of the FE algorithm, demonstrating the accuracy of the present method and its potential applications in the thermal engineering of porous materials.
Abstract: The quantitative rules of the transfer and variation of errors, when the Gaussian integral functions F_n(z) are evaluated sequentially by recursion, are expounded. The traditional viewpoint that negates, in principle, the applicability and reliability of the upward recursive formula is amended. An optimal scheme of joint upward and downward recursion has been developed for sequential F_n(z) computations. No additional accuracy is needed in the fundamental term of the recursion, because the absolute error of F_n(z) always decreases along the recursive approach. The scheme can be employed to modify any existing subprogram for F_n(z) computations. In the case of p-, d-, f-, and g-type Gaussians, combining this method with Schaad's formulas reduces the additive operations by at least 40%, and the multiplicative and exponential operations by 60%.
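The downward branch of such a recursion can be sketched as follows, assuming the standard Boys-function recurrence F_{n-1}(x) = (2x F_n(x) + e^{-x}) / (2n - 1); the downward pass illustrates the error-damping property the abstract relies on (the padding depth and series term count are illustrative choices, not the paper's):

```python
import math

def boys_series(n, x, terms=60):
    """Boys function F_n(x) = ∫_0^1 t^(2n) e^(-x t^2) dt via its Taylor series."""
    total = 0.0
    term = 1.0  # holds (-x)^k / k!
    for k in range(terms):
        total += term / (2 * (n + k) + 1)
        term *= -x / (k + 1)
    return total

def boys_downward(n_max, x, pad=15):
    """F_0..F_{n_max} by downward recursion from a padded top term.
    Downward recursion is stable: absolute errors shrink at each step."""
    top = n_max + pad
    F = [0.0] * (top + 1)
    F[top] = boys_series(top, x)
    e = math.exp(-x)
    for n in range(top, 0, -1):
        F[n - 1] = (2.0 * x * F[n] + e) / (2 * n - 1)
    return F[: n_max + 1]
```

A quick sanity check is the closed form F_0(x) = (1/2)·sqrt(π/x)·erf(sqrt(x)), which the recursion reproduces to near machine precision.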
Abstract: In recent years, high-performance scientific computing on workstation clusters connected by a local area network has become a hot topic. Owing to the longer latency and the higher protocol-processing overhead compared with a powerful single workstation, it is increasingly important in parallel computing to balance not only the numerical load but also the communication load, and to overlap communications with computations. Hence, efficiency evaluation rules must expose these properties of a given parallel algorithm so that an existing algorithm can be optimized to attain its highest parallel efficiency. The traditional efficiency evaluation rules no longer suffice for this task. Fortunately, building on Culler's detailed discussion of the LogP model for MPP interconnection networks, we present in this paper a system of efficiency evaluation rules for parallel computations on a workstation cluster under the PVM 3.0 parallel software framework. These rules satisfy the above requirements. Finally, two typical applications, one synchronous and one asynchronous, are designed to verify the validity of these rules on a cluster of four SGI workstations connected by Ethernet.
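A toy cost model in the spirit of this discussion (not the paper's actual LogP-based rules; all numbers below are hypothetical) shows why overlapping communication with computation raises measured parallel efficiency:

```python
def parallel_time(t1, p, latency, per_msg_overhead, msgs, overlap=0.0):
    """Estimated runtime on p workers: the compute share plus the
    communication cost, a fraction `overlap` of which is hidden
    behind computation."""
    compute = t1 / p
    comm = msgs * (latency + per_msg_overhead)
    return compute + (1.0 - overlap) * comm

def efficiency(t1, tp, p):
    """Classic parallel efficiency: serial time over p times parallel time."""
    return t1 / (p * tp)

# Hypothetical workload: 100 s serial job, 4 workers, 200 messages,
# 10 ms latency and 5 ms protocol overhead per message.
t1, p = 100.0, 4
tp_blocking = parallel_time(t1, p, 0.01, 0.005, 200, overlap=0.0)
tp_overlapped = parallel_time(t1, p, 0.01, 0.005, 200, overlap=0.8)
assert efficiency(t1, tp_overlapped, p) > efficiency(t1, tp_blocking, p)
```

With these numbers, hiding 80% of the communication lifts efficiency from roughly 0.89 to roughly 0.98, which is the effect the evaluation rules are meant to measure.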
Abstract: Noise generated by civil transport aircraft during the take-off and approach-to-land phases of operation is an environmental problem. This article first reviews the aircraft noise problem. The review is followed by a description and assessment of a number of sound propagation methods suitable for applications with a background mean flow field pertinent to aircraft noise. Of the three main areas of the noise problem, i.e. generation, propagation, and radiation, propagation provides a vital link between near-field noise generation and far-field radiation. Its accurate assessment ensures the overall validity of a prediction model. Of the various classes of propagation equations, linearised Euler equations are often cast in either the time domain or the frequency domain. The equations are often solved numerically by computational aeroacoustics techniques, but are subject to the onset of Kelvin-Helmholtz (K-H) instability modes which may ruin the solutions. Other forms of linearised equations, e.g. acoustic perturbation equations, have been proposed, with differing degrees of success.
Funding: The work is supported by a grant from the Hong Kong Research Grants Council (HKU7347/01E), the Program for New Century Excellent Talents in University (NCET-04-0494), and the National Natural Science Foundation of China (Grant No. 50479068).
Abstract: The mixing and merging characteristics of multiple tandem jets in crossflow are investigated with the Computational Fluid Dynamics (CFD) code FLUENT. The realizable k-ε model is employed for turbulent closure of the Reynolds-averaged Navier-Stokes equations. Numerical experiments are performed for 1-, 2- and 4-jet groups, for jet-to-crossflow velocity ratios of R = 4.2-16.3. The computed velocity and scalar concentration fields are in good agreement with experiments using Particle Image Velocimetry (PIV) and Laser Induced Fluorescence (LIF), as well as with previous work. The results show that the leading jet behaves similarly to a single free jet in crossflow, while all the downstream rear jets have less bent-over jet trajectories, suggesting a reduced ambient velocity for the rear jets. The concentration decay of the leading jet is greater than that of the rear jets. When normalized by appropriate crossflow momentum length scales, all jet trajectories follow a universal relation regardless of the sequential order of jet position and the number of jets. Supported by the velocity and trajectory measurements, the averaged maximum effective crossflow velocity ratio is computed to be in the range of 0.39 to 0.47.
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 10872004, the National Basic Research Program of China under Grant No. 2007CB814800, and the Ministry of Education of China under Grant Nos. NCET-06-0011 and 200800010013.
Abstract: We propose an efficient and robust way to design absorbing boundary conditions in atomistic computations. An optimal discrete boundary condition is obtained by minimizing a functional of the reflection coefficient integrated over a range of wave numbers. The minimization is performed with respect to a set of wave numbers at which transparent absorption is reached. Compared with the optimization with respect to the boundary condition coefficients suggested by E and Huang [Phys. Rev. Lett. 87 (2001) 133501], we considerably reduce the number of independent variables and the computing cost. We further demonstrate with numerical examples that both the optimization and the wave absorption are more robust in the proposed design.
Abstract: Many papers have been published on VLF (very low frequency) characteristics as a tool to study seismo-ionospheric perturbations. Usually VLF records (amplitude and/or phase) are used to investigate mainly the temporal evolution of VLF propagation anomalies, with special attention to one particular propagation path. The most important advantage of this paper is the simultaneous use of several propagation paths. A succession of earthquakes (EQs) happened in the Kumamoto area in Kyushu Island: two strong foreshocks with magnitudes of 6.5 and 6.4 on 14 April (UT), and the main shock with magnitude 7.3 on 15 April (UT). Because the EQ epicenters are not far from the VLF transmitter (with the call sign JJI, in Miyazaki prefecture), we can simultaneously utilize 8 observing stations of our network all over Japan. Together with theoretical computations based on wave-hop theory, we trace both the temporal and spatial evolution of the ionospheric perturbation associated with this succession of EQs. It is found that the ionospheric perturbation begins to appear about two weeks before the EQs, and this perturbation becomes most developed 5-3 days before the main shock. When the perturbation is most disturbed, the maximum change in the vertical direction is a depletion of the VLF effective ionospheric height of the order of 10 km, and its horizontal scale (radius) is about 1000 km. These spatio-temporal changes of the seismo-ionospheric perturbation are investigated in detail in the discussion, a comparison is made with the VLF characteristics of the 1995 Kobe EQ of the same magnitude and fault type, and a brief discussion of the generation mechanism of seismo-ionospheric perturbations is finally given.
Abstract: This paper proposes an algorithm for strengthening virtual machine security in cloud computing. Imbalance between load and energy has been one of the disadvantages of older methods of provisioning servers and hosting: if two virtual servers are active on one host and the energy load on that host grows, it allocates the energy of other (virtual) hosts to itself to stay steady, which usually leads to hardware overflow errors and user dissatisfaction. Cloud-based methods have mitigated this problem, but not perfectly; therefore, the proposed algorithm not only implements a suitable security background but also suitably divides energy consumption and load balancing among virtual servers. The proposed algorithm is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. Comparisons show that the proposed method offers high-performance computing, efficiency, and lower energy consumption in the network.
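The load-division idea can be illustrated with the classic greedy longest-processing-time heuristic (a generic sketch only, not the paper's algorithm; the VM loads below are hypothetical):

```python
import heapq

def balance(vm_loads, n_hosts):
    """Greedy longest-processing-time assignment: sort VMs by load in
    descending order and always place the next VM on the currently
    least-loaded host. Returns (total_load, host_id, assigned_loads)."""
    hosts = [(0.0, i, []) for i in range(n_hosts)]
    heapq.heapify(hosts)
    for load in sorted(vm_loads, reverse=True):
        total, i, vms = heapq.heappop(hosts)  # least-loaded host
        vms.append(load)
        heapq.heappush(hosts, (total + load, i, vms))
    return hosts

# Hypothetical VM loads split across two hosts: the heuristic yields
# a perfectly even 8/8 split here.
hosts = balance([5, 3, 3, 2, 2, 1], 2)
assert sorted(total for total, _, _ in hosts) == [8.0, 8.0]
```

The unique host id in the middle of each tuple keeps the heap comparisons well-defined even when two hosts carry equal load.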
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61372076, 61971348, and 62001351), the Foundation of Shaanxi Key Laboratory of Information Communication Network and Security (Grant No. ICNS201802), the Natural Science Basic Research Program of Shaanxi, China (Grant No. 2021JM-142), and the Key Research and Development Program of Shaanxi Province, China (Grant No. 2019ZDLGY09-02).
Abstract: Verification of quantum computations is crucial, since quantum systems are extremely vulnerable to the environment. However, directly verifying the output of a quantum computation is difficult, because efficiently simulating a large-scale quantum computation on a classical computer is usually thought to be impossible. To overcome this difficulty, we propose a self-testing system for quantum computations, which can be used to verify whether a quantum computation has been performed correctly by itself. Our basic idea is to use some extra ancilla qubits to test the output of the computation. We design two kinds of permutation circuits into the original quantum circuit: one is applied on the ancilla qubits, whose output indicates the testing information; the other is applied on all qubits (including ancilla qubits) and aims to uniformly permute the positions of all qubits. We show that both permutation circuits are easy to achieve. In this way, we prove that any quantum computation has an efficient self-testing system. In the end, we also discuss the relation between our self-testing system and interactive proof systems, and show that the two systems are equivalent if the verifier is allowed to have some quantum capacity.
Abstract: In this paper, a class of slightly perturbed equations of the form F(x) = ξ - x + αΦ(x) is treated graphically and symbolically, where Φ(x) is an analytic function of x. For the graphical developments, we set up a simple graphical method for the real roots of the equation F(x) = 0, illustrated by four transcendental equations. In fact, the graphical solution usually provides excellent initial conditions for the iterative solution of the equation. Iterative methods for which no good initial condition close to the root is available may face critical situations ranging from divergence to very slow convergence; a good graphical starting point avoids them. For the analytical developments, literal analytical solutions are obtained for the most celebrated slightly perturbed equation, Kepler's equation of the elliptic orbit. Moreover, the effect of the orbital eccentricity on the rate of convergence of the series is illustrated graphically.
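For the Kepler case mentioned above, a standard Newton iteration (a common numerical approach, not the paper's literal series solution) shows how a good initial condition yields fast convergence to the eccentric anomaly:

```python
import math

def solve_kepler(M, e, tol=1e-14, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton's method. E0 = M is a good starting guess for
    small eccentricity; for large e, E0 = pi is a safer choice."""
    E = M if e < 0.8 else math.pi
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M          # residual of Kepler's equation
        E_new = E - f / (1.0 - e * math.cos(E))
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

E = solve_kepler(1.0, 0.1)
assert abs(E - 0.1 * math.sin(E) - 1.0) < 1e-12
```

For small eccentricities the iteration typically converges in a handful of steps, which mirrors the paper's point that the graphical starting value governs the convergence behaviour.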
Abstract: Task scheduling determines the performance of NOW computing to a large extent. However, the computer system architecture, computing capability, and system load are rarely considered together. In this paper, a biggest-heterogeneous scheduling algorithm is presented. It fully considers the system characteristics (from the application view), structure, and state, so it can always utilize all processing resources under a reasonable premise. Experimental results show that the algorithm can significantly shorten the response time of jobs.
Abstract: This paper discusses the validity of the (adaptive) Lagrange generalized plain finite element method (FEM) and the plate element method for accurate analysis of acoustic waves in multi-layered piezoelectric structures with tiny interfaces between metal electrodes and surface-mounted piezoelectric substrates. We conclude that the quantitative relationships between the acoustic and electric fields in a piezoelectric structure can be accurately determined through the proposed finite element methods. The higher-order Lagrange FEM proposed for dynamic piezoelectric computation proves to be very accurate (prescribed relative error 0.02%-0.04%) and a great improvement in convergence accuracy over the higher-order Mindlin plate element method for piezoelectric structural analysis, owing to the assumptions and corrections in the plate theories. The converged Lagrange finite element methods are compared with the plate element methods, and the computed results are in good agreement with available exact and experimental data. The adaptive Lagrange finite element methods and a new FEA computer program developed for macro- and micro-scale analyses are reviewed and recently extended, with great potential, to high-precision nano-scale analysis in this paper, and the similarities between piezoelectric and seismic wave propagation in layered structures and plates are stressed.
Abstract: We present multi-threading and SIMD optimizations of the short-range potential calculation kernel in molecular dynamics. For the multi-threading optimization, we design a partition-and-two-steps (PTS) method to avoid the write conflicts caused by using Newton's third law. Our method eliminates the serialization bottleneck without extra memory. We implement our PTS method using OpenMP. Afterwards, we discuss the influence of the cutoff if-statement on the performance of vectorization in MD simulations. We propose a pre-searching neighbors method, which makes about 70% of atoms pass the cutoff check, removing a large amount of redundant calculation. The experimental results prove that our PTS method is scalable and efficient. In double precision, our 256-bit SIMD implementation is about 3x faster than the scalar version.
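The pre-searching idea, doing the cutoff test once while building a neighbor list and then applying Newton's third law per stored pair, can be sketched in plain Python (a scalar illustration with a simple Lennard-Jones pair force; the paper's kernel is multi-threaded and SIMD-vectorized):

```python
import math

def lj_force(r):
    """Lennard-Jones pair force magnitude in reduced units."""
    inv6 = r ** -6
    return 24.0 * (2.0 * inv6 * inv6 - inv6) / r

def build_neighbor_list(pos, cutoff):
    """Pre-search pairs within the cutoff so the inner force loop is
    free of failing distance checks; each pair is stored once so that
    Newton's third law can be applied."""
    pairs = []
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = math.dist(pos[i], pos[j])
            if r < cutoff:
                pairs.append((i, j, r))
    return pairs

def forces(pos, cutoff):
    """Accumulate forces over the pre-searched pair list."""
    f = [[0.0, 0.0, 0.0] for _ in pos]
    for i, j, r in build_neighbor_list(pos, cutoff):
        mag = lj_force(r)
        for d in range(3):
            comp = mag * (pos[i][d] - pos[j][d]) / r
            f[i][d] += comp   # action ...
            f[j][d] -= comp   # ... and reaction (Newton's third law)
    return f
```

Because every pair contributes equal and opposite components, the net force over all atoms sums to zero, which is a convenient correctness check for this kind of kernel.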
Funding: The authors are grateful for the facilities provided by the Computational Mechanics Laboratory of the Department of Mechanical Engineering.
Abstract: A posteriori error computations in the space-time coupled and space-time decoupled finite element methods for initial value problems are essential: 1) to determine the accuracy of the computed evolution; 2) if the errors in the computed solutions are higher than an acceptable threshold, a posteriori error computations provide measures for designing adaptive processes to improve the accuracy of the solution. How well the space-time approximation in each of the two methods satisfies the equations of the mathematical model over the space-time domain in the pointwise sense is the absolute measure of the accuracy of the computed solution. When the L2-norm of the space-time residual over the space-time domain of the computation approaches zero, the approximation φ_h(x,t) → φ(x,t), the theoretical solution. Thus, the proximity of ||E||_L2, the L2-norm of the space-time residual function, to zero is a measure of the accuracy of, or the error in, the computed solution. In this paper, we present a methodology and a computational framework for computing ||E||_L2 in the a posteriori error computations for both space-time coupled and space-time decoupled finite element methods. It is shown that the proposed a posteriori computations require the h,p,k framework in both the space-time coupled and the space-time decoupled finite element methods to ensure that the space-time integrals over the space-time discretization are Riemann; hence the proposed a posteriori computations cannot be performed in finite difference and finite volume methods for solving initial value problems. High-order global differentiability in time of the integration methods is essential in the space-time decoupled method for a posteriori computations. This restricts the use of methods like Euler's method, Runge-Kutta methods, etc., in the time integration of the ODEs in time. Mathematical and computational details, including model problem studies, are presented in the paper. To the authors' knowledge, this is the first presentation of the proposed a posteriori error computation methodology and computational infrastructure for initial value problems.
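The residual-norm measure described above can be illustrated on a scalar model IVP y' = y, y(0) = 1: the better a candidate approximation satisfies the equation pointwise, the smaller the L2-norm of its residual (a one-dimensional sketch with hypothetical candidate functions, not the paper's space-time finite element framework):

```python
import math

def l2_residual(p, dp, a, b, n=2000):
    """||R||_L2 of the residual R(t) = p'(t) - p(t) for the model
    IVP y' = y, computed with the composite trapezoidal rule."""
    h = (b - a) / n
    s = 0.0
    for k in range(n + 1):
        t = a + k * h
        r2 = (dp(t) - p(t)) ** 2
        s += r2 if 0 < k < n else 0.5 * r2  # half-weight endpoints
    return math.sqrt(s * h)

# Two candidate approximations of the exact solution y = e^t on [0, 1]:
lin = lambda t: 1.0 + (math.e - 1.0) * t    # linear interpolant
dlin = lambda t: math.e - 1.0
quad = lambda t: 1.0 + t + 0.5 * t * t      # second-order Taylor polynomial
dquad = lambda t: 1.0 + t

# The quadratic satisfies the ODE more closely, so its residual norm
# is smaller -- exactly the ordering the a posteriori measure reports.
assert l2_residual(quad, dquad, 0.0, 1.0) < l2_residual(lin, dlin, 0.0, 1.0)
```

For the quadratic candidate the residual is -t²/2, whose exact L2-norm on [0, 1] is sqrt(1/20) ≈ 0.2236, and the quadrature reproduces this closely.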
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 52002246, 52192614, U22A2077, U20A20166, 52125205, 52372154; Natural Science Foundation of Beijing Municipality, Grant/Award Numbers: 2222088, Z180011; Shenzhen Fundamental Research Project, Grant/Award Number: JCYJ20190808170601664; Shenzhen Science and Technology Program, Grant/Award Number: KQTD20170810105439418; Science and Technology Innovation Project of Shenzhen Excellent Talents, Grant/Award Number: RCBS20200714114919006; National Key R&D Program of China, Grant/Award Numbers: 2021YFB3200304, 2021YFB3200302; Fundamental Research Funds for the Central Universities.
Abstract: The emulation of human multisensory functions to construct artificial perception systems is an intriguing challenge for developing humanoid robotics and cross-modal human-machine interfaces. Inspired by human multisensory signal generation and neuroplasticity-based signal processing, an artificial perceptual neuro array with visual-tactile sensing, processing, learning, and memory is demonstrated here. The neuromorphic bimodal perception array compactly combines an artificial photoelectric synapse network and an integrated mechanoluminescent layer, endowing individual and synergistic plastic modulation of optical and mechanical information, including short-term memory, long-term memory, paired-pulse facilitation, and "learning-experience" behavior. Sequential or superimposed visual and tactile stimulus inputs can efficiently simulate the associative learning process of Pavlov's dog. The fusion of visual and tactile modulation enables enhanced memory of the stimulation image during the learning process. A machine-learning algorithm is coupled with an artificial neural network for pattern recognition, achieving a recognition accuracy of 70% for bimodal training, which is higher than that obtained by unimodal training. In addition, the artificial perceptual neuron has a low energy consumption of ~20 pJ. With its mechanical compliance and simple architecture, the neuromorphic bimodal perception array has promising applications in large-scale cross-modal interactions and high-throughput intelligent perception.
Funding: Project supported by the National Key R&D Project of the Ministry of Science and Technology of China (Grant No. 2022YFA1203100), the National Natural Science Foundation of China (Grant No. 52122606), and funding from Shanghai Polytechnic University.
Abstract: As transistor sizes shrink and power density increases, thermal simulation has become an indispensable part of the device design procedure. However, existing works on advanced-technology transistors use simplified empirical models to calculate the effective thermal conductivity in simulations. In this work, we present a dataset of size-dependent effective thermal conductivity with electron and phonon properties extracted from ab initio computations. Absolute in-plane and cross-plane thermal conductivity data of eight semiconducting materials (Si, Ge, GaN, AlN, 4H-SiC, GaAs, InAs, BAs) and four metallic materials (Al, W, TiN, Ti), with characteristic lengths ranging from 5 nm to 50 nm, are provided. Besides the absolute values, the normalized effective thermal conductivity is also given, in case it needs to be used with updated bulk thermal conductivity values in the future.