FastVPINNs: An efficient tensor-based Python library for solving partial differential equations using hp-Variational Physics Informed Neural Networks

Thivin Anandh¹¶, Divij Ghose¹, and Sashikumaar Ganesan¹¶

¹ Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, India
¶ Corresponding author
Introduction

Partial differential equations (PDEs) are essential in modeling physical phenomena such as heat transfer, electromagnetics, and fluid dynamics. The current progress in the field of scientific machine learning (SciML) has resulted in incorporating deep learning and data-driven methods to solve PDEs. Let us consider a two-dimensional Poisson equation as an example, defined on the domain $\Omega$, which reads as follows:

$$-\nabla^2 u(x) = f(x), \quad x \in \Omega \subset \mathbb{R}^2, \qquad (1)$$

$$u(x) = g(x), \quad x \in \partial\Omega. \qquad (2)$$

Here, $x \in \Omega$ is the spatial coordinate, $u(x)$ is the solution of the PDE, $f(x)$ is a known source term, and $g(x)$ is the value of the solution on the domain boundary, $\partial\Omega$.

A neural network is a parametric function of $x$, denoted as $u_{\mathrm{NN}}(x; W, b)$. In this context, $W$ and $b$ represent the weights and biases of the network. When the neural network consists of $h$ hidden layers, with the $i$th layer containing $n_i$ neurons, the mathematical representation of the function takes the following shape:
$$u_{\mathrm{NN}}(x; W, b) = l \circ T^{(h)} \circ T^{(h-1)} \circ \dots \circ T^{(1)}(x).$$

Here, $l : \mathbb{R}^{n_h} \rightarrow \mathbb{R}$ is a linear mapping in the output layer and $T^{(i)}(\cdot) = \sigma\left(W^{(i)} \times \cdot + b^{(i)}\right)$ is a non-linear mapping in the $i$th layer ($i = 1, 2, \dots, h$), with the non-linear activation function $\sigma$ and the weights $W^{(i)}$ and biases $b^{(i)}$ of the respective layers.
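For readers who prefer code to notation, the following minimal sketch builds such a network with TensorFlow; the number of hidden layers, the layer widths, and the tanh activation are illustrative assumptions and are not prescribed by FastVPINNs.

```python
import tensorflow as tf

# A fully connected network u_NN(x; W, b): three hidden mappings
# T^(i)(.) = tanh(W^(i) . + b^(i)) followed by a linear output layer l.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(30, activation="tanh"),  # T^(1)
    tf.keras.layers.Dense(30, activation="tanh"),  # T^(2)
    tf.keras.layers.Dense(30, activation="tanh"),  # T^(3)
    tf.keras.layers.Dense(1),                      # linear output layer l
])

x = tf.random.uniform((100, 2))  # 100 sample points in the 2D domain
u = model(x)                     # u_NN(x; W, b), shape (100, 1)
```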
Physics-informed neural networks (PINNs), introduced by Raissi et al. (2019), work by incorporating the governing equations of the physical system and the boundary conditions as physics-based constraints to train the neural network. The core idea of PINNs lies in the ability to obtain the gradients of the solution with respect to the input using the automatic differentiation routines available within deep learning frameworks such as TensorFlow (Abadi et al., 2015). The loss function of PINNs can be written as follows:
$$L_p(W, b) = \frac{1}{N_T} \sum_{t=1}^{N_T} \left( -\nabla^2 u_{\mathrm{NN}}(x_t; W, b) - f(x_t) \right)^2,$$

$$L_b(W, b) = \frac{1}{N_D} \sum_{d=1}^{N_D} \left( u_{\mathrm{NN}}(x_d; W, b) - g(x_d) \right)^2,$$

$$L_{\mathrm{PINN}}(W, b) = L_p + \tau L_b.$$
Here, the output of the neural network, 𝑢NN (𝑥; 𝑊 , 𝑏), is used to approximate the unknown
solution. In addition, 𝑁𝑇 is the total number of training points in the interior of the domain Ω,
𝑁𝐷 is the total number of training points on the boundary 𝜕Ω, 𝜏 is a scaling factor applied to
control the penalty on the boundary term, and 𝐿PINN (𝑊 , 𝑏) is the loss function of the PINNs.
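As an illustration of this loss, the sketch below assembles $L_{\mathrm{PINN}}$ for the Poisson problem using TensorFlow's automatic differentiation. The names `pinn_loss`, `x_int`, `x_bnd`, `g_bnd`, `f`, and `tau` are hypothetical and not part of the FastVPINNs API; `model` is assumed to map tensors of shape (N, 2) to scalar outputs, and `f` to return the source term as a vector of shape (N,).

```python
import tensorflow as tf

def pinn_loss(model, x_int, x_bnd, g_bnd, f, tau=10.0):
    """Illustrative PINN loss for -∇²u = f with Dirichlet data g (not library code).

    x_int: (N_T, 2) interior points, x_bnd: (N_D, 2) boundary points,
    g_bnd: (N_D,) boundary values, f: callable (N, 2) -> (N,), tau: penalty factor.
    """
    with tf.GradientTape() as outer:
        outer.watch(x_int)
        with tf.GradientTape() as inner:
            inner.watch(x_int)
            u = model(x_int)
        grad_u = inner.gradient(u, x_int)          # (N_T, 2) first derivatives
    hess = outer.batch_jacobian(grad_u, x_int)     # (N_T, 2, 2) second derivatives
    laplacian = hess[:, 0, 0] + hess[:, 1, 1]      # ∇²u_NN at the interior points

    loss_p = tf.reduce_mean(tf.square(-laplacian - f(x_int)))       # PDE residual
    loss_b = tf.reduce_mean(tf.square(model(x_bnd)[:, 0] - g_bnd))  # boundary term
    return loss_p + tau * loss_b
```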
Variational physics-informed neural networks (VPINNs) (Kharazmi et al., 2019) are an extension of PINNs, where the weak form of the PDE is used in the loss function of the neural network. The weak form of the PDE is obtained by multiplying the PDE by a test function, integrating over the domain, and applying integration by parts to the higher-order derivative terms. The method of hp-VPINNs (Kharazmi et al., 2021) was subsequently developed to increase the accuracy using h-refinement (increasing the number of elements) and p-refinement (increasing the number of test functions). The domain $\Omega$ is divided into an array of non-overlapping cells, labeled as $K_k$, where $k = 1, 2, \dots, N_{\text{elem}}$, such that their union $\bigcup_{k=1}^{N_{\text{elem}}} K_k = \Omega$ covers the entire domain. The hp-VPINNs framework utilizes specific test functions $v_k$, where $k$ ranges from 1 to $N_{\text{elem}}$, that are localized and defined within the individual non-overlapping elements across the domain:
$$v_k = \begin{cases} \neq 0, & \text{over } K_k, \\ 0, & \text{elsewhere.} \end{cases}$$
The loss function of hp-VPINNs with $N_{\text{elem}}$ elements in the domain can be written as follows:

$$L_v(W, b) = \frac{1}{N_{\text{elem}}} \sum_{k=1}^{N_{\text{elem}}} \left( \int_{K_k} \nabla u_{\mathrm{NN}}(x; W, b) \cdot \nabla v_k \, dK - \int_{K_k} f \, v_k \, dK \right)^2,$$

$$L_b(W, b) = \frac{1}{N_D} \sum_{d=1}^{N_D} \left( u_{\mathrm{NN}}(x_d; W, b) - g(x_d) \right)^2,$$

$$L_{\mathrm{VPINN}}(W, b) = L_v + \tau L_b.$$
Here, $K_k$ is the $k$th element in the domain and $v_k$ is the test function in the respective element. Further, $L_v(W, b)$ is the weak-form PDE residual and $L_{\mathrm{VPINN}}(W, b)$ is the loss function of the hp-VPINNs. For more information on the derivation of the weak form of the PDE and the loss function of hp-VPINNs, refer to Anandh et al. (2024) and Ganesan & Tobiska (2017).
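To make the weak-form residual concrete, the sketch below evaluates the bracketed term of $L_v$ on a single element by numerical quadrature; summing the squares of these residuals over all elements and dividing by $N_{\text{elem}}$ gives $L_v$. All names and array layouts are assumptions for illustration and do not correspond to the tensorized implementation used by FastVPINNs (described in the Modules section).

```python
import tensorflow as tf

def element_residual(model, x_q, w_q, v_q, grad_v_q, f_q):
    """Weak-form residuals of -∇²u = f on one element K_k (illustrative only).

    x_q: (N_q, 2) quadrature points, w_q: (N_q,) weights with Jacobian included,
    v_q: (N_t, N_q) test function values, grad_v_q: (N_t, N_q, 2) their gradients,
    f_q: (N_q,) source term at the quadrature points.
    """
    with tf.GradientTape() as tape:
        tape.watch(x_q)
        u = model(x_q)
    grad_u = tape.gradient(u, x_q)                       # (N_q, 2)

    # ∫_K ∇u_NN · ∇v_j dK  ≈  Σ_q w_q ∇u_NN(x_q) · ∇v_j(x_q)
    diffusion = tf.einsum("q,tqd,qd->t", w_q, grad_v_q, grad_u)
    # ∫_K f v_j dK  ≈  Σ_q w_q f(x_q) v_j(x_q)
    forcing = tf.einsum("q,tq,q->t", w_q, v_q, f_q)
    return diffusion - forcing                           # one residual per test function
```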
Statement of need
The existing implementation of the hp-VPINNs framework (Kharazmi, 2023) suffers from two major challenges. One is the inability of the framework to handle complex geometries, and the other is the increase in training time associated with an increasing number of elements within the domain. In Anandh et al. (2024), we presented FastVPINNs, which addresses both of these challenges. FastVPINNs handles complex geometries by using bilinear transformations, and it uses a tensor-based loss computation to reduce the dependency of the training time on the number of elements. The current implementation of FastVPINNs can achieve a speed-up of up to 100 times compared with the existing implementation of hp-VPINNs. We have also shown that, with proper hyperparameter selection, FastVPINNs can outperform PINNs in both accuracy and training time, especially for problems with high-frequency solutions.
In this work, we present the Python-based implementation of the novel FastVPINNs framework, which is built using TensorFlow v2.0 (Abadi et al., 2015). FastVPINNs provides an elegant API for users to solve both forward and inverse problems for PDEs such as the Poisson, Helmholtz, and convection-diffusion equations. With the current level of API abstraction, users should be able to solve PDEs with fewer than six API calls, as shown in the minimal working example section. The framework is well documented with examples, which enables users to get started with ease. To the best of the authors' knowledge, FastVPINNs is the first
open-source implementation of the hp-VPINNs framework with tensor-based loss computation
and support for complex geometries.
The ability of the framework to let users train an hp-VPINN to solve a PDE both faster and with minimal code can enable the widespread application of this method to several real-world problems, which often require complex geometries with a large number of elements within the domain.
Modules
The FastVPINNs framework consists of five core modules, which are Geometry, FE, Data,
Physics, and Model.
Figure 1: FastVPINNs Modules
Geometry Module
This module provides the functionality to define the geometry of the domain. The user can
either generate a quadrilateral mesh internally or read an external .mesh file. The module also
provides the functionality to obtain boundary points for complex geometries.
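As a rough illustration of the internally generated mesh, the sketch below builds a uniform quadrilateral mesh on the unit square; the function name and return layout are assumptions for illustration and not the module's actual API.

```python
import numpy as np

def unit_square_quad_mesh(n_x, n_y):
    """Uniform n_x x n_y quadrilateral mesh on [0, 1]^2 (illustrative only)."""
    x = np.linspace(0.0, 1.0, n_x + 1)
    y = np.linspace(0.0, 1.0, n_y + 1)
    xx, yy = np.meshgrid(x, y, indexing="ij")
    nodes = np.column_stack([xx.ravel(), yy.ravel()])    # (n_nodes, 2)

    cells = []
    for i in range(n_x):
        for j in range(n_y):
            n0 = i * (n_y + 1) + j                       # bottom-left node of cell (i, j)
            cells.append([n0, n0 + n_y + 1, n0 + n_y + 2, n0 + 1])  # counter-clockwise
    return nodes, np.array(cells)

nodes, cells = unit_square_quad_mesh(4, 4)               # 25 nodes, 16 elements
```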
FE Module
The FE module is responsible for handling the finite element test functions and their gradients. The module's functionality can be broadly classified into four categories:
• Test functions: Provides the test function values and their gradients on a given reference element. In our framework, we use Lagrange polynomials as the test functions.
• Quadrature functions: Provides the quadrature points and weights for a given element based on the quadrature order and the quadrature method selected by the user. These are used to perform the numerical integration of the weak-form residual of the PDE.
• Transformation functions: Provides the implementation of transformation routines such as the affine transformation and the bilinear transformation. These are used to transform the test function values and gradients from the reference element to the actual element (see the sketch after this list).
• Finite Element Setup: Provides the functionality to set up the test functions, quadrature rules, and the transformation for every element and to save them in a common class for later access. Further, it also hosts functions to plot the mesh with quadrature points, assign boundary values based on the boundary points obtained from the Geometry module, and calculate the forcing term in the residual.
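As referenced in the transformation bullet above, the following sketch shows a tensor-product Gauss-Legendre rule on the reference element $[-1, 1]^2$ and the bilinear map to a physical quadrilateral. The helper names are hypothetical, and the transformation of gradients via the Jacobian of the map is omitted for brevity; this mirrors standard finite element practice rather than the module's exact implementation.

```python
import numpy as np

def reference_quadrature(order):
    """Tensor-product Gauss-Legendre points and weights on [-1, 1]^2."""
    pts_1d, wts_1d = np.polynomial.legendre.leggauss(order)
    xi, eta = np.meshgrid(pts_1d, pts_1d, indexing="ij")
    wts = np.outer(wts_1d, wts_1d).ravel()
    return np.column_stack([xi.ravel(), eta.ravel()]), wts

def bilinear_map(verts, ref_pts):
    """Map reference points to a quadrilateral with counter-clockwise vertices `verts`."""
    xi, eta = ref_pts[:, 0], ref_pts[:, 1]
    shape = 0.25 * np.column_stack([
        (1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
        (1 + xi) * (1 + eta), (1 - xi) * (1 + eta),
    ])
    return shape @ verts                                  # (N_q, 2) physical points

ref_pts, ref_wts = reference_quadrature(order=3)          # 3 x 3 = 9 quadrature points
verts = np.array([[0.0, 0.0], [2.0, 0.2], [2.2, 1.5], [0.1, 1.0]])
phys_pts = bilinear_map(verts, ref_pts)
```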
Remark: The module is named FE (Finite Element) Module because of its similarities with
classical FEM routines, such as test functions, numerical quadratures, and transformations.
However, we would like to state that our framework is not an FEM solver, but an hp-VPINNs
solver.
Data Module
The Data module collects the data required for training from all of the other modules and converts them to a tensor data type with the user-specified precision (for example, tf.float32 or tf.float64). It also assembles the test function values and gradients into a three-dimensional tensor, which is used during the loss computation.
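The stacking step can be pictured as follows; the shapes and names are assumptions for illustration, not the library's API.

```python
import numpy as np
import tensorflow as tf

# Per-element test function values of shape (N_test, N_quad), stacked into a
# single (N_elem, N_test, N_quad) tensor and cast to the user-chosen precision.
per_element_values = [np.random.rand(5, 16) for _ in range(10)]  # 10 elements
test_tensor = tf.constant(np.stack(per_element_values), dtype=tf.float32)
print(test_tensor.shape)  # (10, 5, 16)
```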
Physics Module
This module contains functions that compute the variational loss function for different PDEs. These functions accept the test function tensors and the gradients predicted by the neural network, along with the forcing matrix, and compute the PDE residuals using tensor-based operations.
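A minimal sketch of such a tensor-based residual computation for the Poisson equation is shown below; the tensor names and shapes are assumptions for illustration and do not reflect the library's exact data layout.

```python
import tensorflow as tf

def poisson_variational_residual(grad_v_x, grad_v_y, u_x, u_y, forcing):
    """Weak-form Poisson residual for all elements at once (illustrative only).

    grad_v_x, grad_v_y: (N_elem, N_test, N_quad) test function derivatives
                        pre-multiplied by the quadrature weights.
    u_x, u_y:           (N_elem, N_quad) predicted solution derivatives at the
                        quadrature points of every element.
    forcing:            (N_elem, N_test) precomputed forcing integrals.
    """
    diffusion = tf.einsum("etq,eq->et", grad_v_x, u_x) \
              + tf.einsum("etq,eq->et", grad_v_y, u_y)
    residual = diffusion - forcing                  # (N_elem, N_test)
    return tf.reduce_mean(tf.square(residual))      # scalar variational loss
```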
Model Module
This module contains custom subclasses of the tensorflow.keras.Model class, which are used to train the neural network. The module provides the functionality to train the neural network using the tensor-based loss computation and to predict the solution of the PDE for a given input.
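The sketch below shows the general pattern of such a subclass, with a custom train_step that plugs a variational loss into the standard Keras training loop; the class name and constructor arguments are illustrative and not the library's actual model classes.

```python
import tensorflow as tf

class VariationalModel(tf.keras.Model):
    """Skeleton of a custom training model (illustrative, not the library's class)."""

    def __init__(self, network, loss_fn):
        super().__init__()
        self.network = network      # dense network u_NN(x; W, b)
        self.loss_fn = loss_fn      # closure returning the tensor-based loss

    def call(self, x):
        return self.network(x)      # predict u_NN at the points x

    def train_step(self, data):
        # `data` is unused here: all training tensors are captured inside loss_fn.
        with tf.GradientTape() as tape:
            loss = self.loss_fn(self.network)   # e.g. L_v + tau * L_b
        grads = tape.gradient(loss, self.network.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.network.trainable_variables))
        return {"loss": loss}
```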
Minimal Working Example
With the higher level of abstraction provided by the FastVPINNs framework, users can solve a PDE with just six API calls. A minimal working example for solving the Poisson equation with the FastVPINNs framework can be found here. The example files with detailed documentation can be found in the Tutorials section of the documentation.
Testing
The FastVPINNs framework has a strong testing suite, which tests the core functionalities of the framework. The testing suite is built using the pytest framework and is integrated with the continuous integration pipeline provided by GitHub Actions. The tests can be broadly classified into three categories:
• Unit Testing: Covers the testing of individual modules and their functionalities.
• Integration Testing: Covers the overall flow of the framework. Different PDEs are solved with different parameters to check whether the accuracy after training is within acceptable limits. This ensures that the modules work together as expected.
• Compatibility Testing: The framework is tested with different versions of Python (3.8, 3.9, 3.10, and 3.11) and on different operating systems (Ubuntu, macOS, and Windows) to ensure compatibility across platforms.
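As an illustration of the style of unit test described above (this example is not taken from the library's test suite), a pytest test can check that a tensor-product Gauss-Legendre rule on the reference element reproduces its area:

```python
import numpy as np

def test_reference_quadrature_weights_sum_to_area():
    # The 2D tensor-product Gauss-Legendre weights must sum to the
    # area of the reference element [-1, 1]^2, i.e. 4.
    _, wts_1d = np.polynomial.legendre.leggauss(4)
    weights_2d = np.outer(wts_1d, wts_1d)
    assert np.isclose(weights_2d.sum(), 4.0)
```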
Acknowledgements
We thank Shell Research, India, for providing partial funding for this project. We are also thankful for the partial support provided by MHRD Grant No. STARS-1/388 (SPADE).
References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., … Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. https://doi.org/10.48550/arXiv.1603.04467

Anandh, T., Ghose, D., Jain, H., & Ganesan, S. (2024). FastVPINNs: Tensor-driven acceleration of VPINNs for complex geometries. https://doi.org/10.48550/arXiv.2404.12063

Ganesan, S., & Tobiska, L. (2017). Finite elements: Theory and algorithms. Cambridge University Press. https://doi.org/10.1017/9781108235013

Kharazmi, E. (2023). hp-VPINNs: High-performance variational physics-informed neural networks. https://github.com/ehsankharazmi/hp-VPINNs

Kharazmi, E., Zhang, Z., & Karniadakis, G. E. (2019). Variational physics-informed neural networks for solving partial differential equations. arXiv Preprint arXiv:1912.00873. https://doi.org/10.48550/arXiv.1912.00873

Kharazmi, E., Zhang, Z., & Karniadakis, G. E. (2021). hp-VPINNs: Variational physics-informed neural networks with domain decomposition. Computer Methods in Applied Mechanics and Engineering, 374, 113547. https://doi.org/10.1016/j.cma.2020.113547