In the past three decades, a wide array of computational methodologies and simulation frameworks have emerged to address the complexities of modeling flow and transport processes in fractured porous media. Conformal mesh approaches, which explicitly align the computational grid with fracture surfaces, are widely considered the most accurate. However, such methods require excessively fine-scale meshing, rendering them impractical for large or complex fracture networks. The Embedded Discrete Fracture Model (EDFM) offers a good balance between accuracy and efficiency and has gained considerable traction in recent years. Nonetheless, it is not free of drawbacks: EDFM can, and often will, generate fracture cells whose volumes are orders of magnitude smaller than those of the matrix cells, which significantly degrades the convergence of nonlinear solvers. In this work, we propose to learn the complex flow and transport dynamics in fractured porous media with graph neural networks (GNNs). GNNs are well suited to this task because of the unstructured topology of the computational grid produced by the EDFM discretization. We propose two deep learning architectures, a GNN and a recurrent GNN. Both networks follow a two-stage training strategy: an autoregressive one-step rollout, followed by a fine-tuning stage in which the model is supervised using the whole ground-truth sequence. We demonstrate that this two-stage training approach is effective in mitigating error accumulation during autoregressive model rollouts at test time. Our findings indicate that both GNNs generalize well to unseen fracture realizations. While the second stage of training proved beneficial for the GNN model, its impact on the recurrent GNN model was less pronounced. Finally, the performance of both GNNs in temporal extrapolation is tested. The recurrent GNN significantly outperformed the GNN in terms of accuracy, underscoring its superior capability in predicting long sequences.
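The two-stage training strategy described above can be illustrated with a minimal sketch. This is not the paper's architecture: it uses a toy linear message-passing update on a small hypothetical graph (the graph, feature sizes, and weight matrices are all assumptions) purely to show the difference between the two supervision signals — stage 1 predicts one step ahead from the true state, while stage 2 unrolls the model autoregressively and compares the whole predicted trajectory against the ground-truth sequence.

```python
import numpy as np

# Hypothetical toy setup, NOT the paper's model: a tiny directed graph
# whose node states evolve over time under a linear GNN-style update.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 0), (1, 0)]        # assumed toy graph
n_nodes, n_feat = 3, 4
w_self = rng.normal(scale=0.1, size=(n_feat, n_feat))
w_nbr = rng.normal(scale=0.1, size=(n_feat, n_feat))

def gnn_step(x):
    """One message-passing update: mean-aggregate incoming neighbor states."""
    agg = np.zeros_like(x)
    cnt = np.zeros((x.shape[0], 1))
    for s, d in edges:
        agg[d] += x[s]
        cnt[d] += 1
    return x @ w_self + (agg / np.maximum(cnt, 1)) @ w_nbr

def rollout(x0, steps):
    """Autoregressive rollout: each prediction is fed back as the next input."""
    traj = [x0]
    for _ in range(steps):
        traj.append(gnn_step(traj[-1]))
    return traj

# Synthetic stand-in for a simulator-generated ground-truth sequence.
truth = [rng.normal(size=(n_nodes, n_feat)) for _ in range(5)]

# Stage 1: one-step supervision -- predict x_{t+1} from the TRUE x_t.
stage1_loss = float(np.mean([
    np.mean((gnn_step(truth[t]) - truth[t + 1]) ** 2)
    for t in range(len(truth) - 1)
]))

# Stage 2: full-sequence supervision -- unroll from x_0 and penalize the
# whole trajectory, exposing the error accumulation seen at test time.
pred = rollout(truth[0], steps=len(truth) - 1)
stage2_loss = float(np.mean([
    np.mean((pred[t] - truth[t]) ** 2) for t in range(1, len(truth))
]))

print(stage1_loss, stage2_loss)
```

In practice the stage-2 loss is used to fine-tune a model already trained with the stage-1 objective; because the rollout feeds predictions back in, its gradient (in a differentiable framework) directly discourages compounding errors over long horizons.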
Funding: Supported by ADNOC (Grant No. 8434000476), the National Natural Science Foundation of China (Grant No. 52304030), and the Natural Science Foundation of Shandong Province, China (Grant No. ZR2023QA034).