Physics-informed neural networks (PINNs) are promising replacements for conventional mesh-based partial differential equation (PDE) solvers, offering more accurate and flexible PDE solutions. However, PINNs are hampered by relatively slow convergence and by the need to perform additional, potentially expensive training for new PDE parameters. To address this limitation, we introduce LatentPINN, a framework that uses latent representations of the PDE parameters as additional inputs (alongside the coordinates) to PINNs and allows training over the distribution of these parameters. Motivated by recent progress on generative models, we propose using latent diffusion models to learn compressed latent representations of the distribution of PDE parameters, which then act as input parameters for the neural-network functional solution. We use a two-stage training scheme: in the first stage, we learn the latent representations for the distribution of PDE parameters; in the second stage, we train a physics-informed neural network on inputs given by samples drawn randomly from the coordinate space within the solution domain and samples from the learned latent representation of the PDE parameters. Considering their importance in capturing evolving interfaces and fronts in various fields, we test the approach on a class of level set equations given, for example, by the nonlinear Eikonal equation. We share results for three sets of Eikonal parameters (velocity models). The proposed method performs well on new phase velocity models without any additional training.
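The physics-informed loss behind this abstract can be illustrated for the Eikonal equation, |∇T(x)| = 1/v(x), which constrains the traveltime field T for a velocity model v. The sketch below is illustrative only (it is not the paper's implementation): it evaluates the Eikonal residual for a candidate traveltime function by central finite differences, whereas a real PINN would use automatic differentiation through a network that also takes the latent code as input. All function names here are hypothetical.

```python
import numpy as np

def eikonal_residual(T, v, x, h=1e-4):
    """Central finite-difference Eikonal residual |grad T| - 1/v
    at points x of shape [N, D]."""
    grads = []
    for d in range(x.shape[1]):
        e = np.zeros(x.shape[1])
        e[d] = h
        grads.append((T(x + e) - T(x - e)) / (2.0 * h))
    grad_norm = np.sqrt(sum(g ** 2 for g in grads))
    return grad_norm - 1.0 / v(x)

# Toy check: in a homogeneous medium with v = 2, the exact traveltime
# from a source at the origin is T(x) = |x| / 2, so the residual is ~0.
T = lambda x: np.linalg.norm(x, axis=1) / 2.0
v = lambda x: np.full(x.shape[0], 2.0)
pts = np.array([[1.0, 1.0], [0.5, 2.0]])
res = eikonal_residual(T, v, pts)
```

In a PINN, this residual (evaluated at randomly drawn coordinates) is squared and minimized as the training loss, so the network is driven toward a function that satisfies the PDE everywhere in the domain rather than at mesh nodes.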
Extracting discriminative speaker-specific representations from speech signals and transforming them into fixed-length vectors are key steps in speaker identification and verification systems. In this study, we propose a latent discriminative representation learning method for speaker recognition, in which the learned representations are not only discriminative but also relevant. Specifically, we introduce an additional speaker-embedding lookup table to explore the relevance between different utterances from the same speaker. Moreover, a reconstruction constraint intended to learn a linear mapping matrix is introduced to make the representations discriminative. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods on the Apollo dataset used in the Fearless Steps Challenge at INTERSPEECH 2019 and on the TIMIT dataset.
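The speaker-embedding lookup table idea can be sketched as follows. This is a minimal illustration under assumed details (the abstract does not specify the loss form, and all names here are hypothetical): each speaker ID indexes a shared vector in a table, and a relevance term pulls every utterance representation of that speaker toward the same entry, tying utterances of one speaker together.

```python
import numpy as np

# Hypothetical sketch of a speaker-embedding lookup table: one trainable
# vector per speaker ID, shared across that speaker's utterances.
rng = np.random.default_rng(0)
num_speakers, dim = 4, 8
table = rng.normal(size=(num_speakers, dim))  # lookup table, one row per speaker

def relevance_loss(utt_repr, speaker_id):
    """Squared distance between an utterance representation and its
    speaker's table entry; minimizing it anchors all utterances of the
    same speaker to one shared embedding."""
    return float(np.sum((utt_repr - table[speaker_id]) ** 2))

# Moving a representation toward its speaker's anchor lowers the loss.
u = rng.normal(size=dim)            # an utterance representation
closer = u + 0.5 * (table[0] - u)   # halfway toward speaker 0's entry
```

During training, this relevance term would be combined with the reconstruction constraint mentioned in the abstract, so the representations end up both discriminative across speakers and consistent within a speaker.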
Funding: The authors thank King Abdullah University of Science and Technology (KAUST) for supporting this research and the Seismic Wave Analysis group for the supportive and encouraging environment.
Funding: Project supported by the National Natural Science Foundation of China (Nos. U1836220 and 61672267), the Qing Lan Talent Program of Jiangsu Province, China, and the Jiangsu Province Key Research and Development Plan (Industry Foresight and Key Core Technology) (No. BE2020036).