
SCIENCE ADVANCES | RESEARCH ARTICLE

APPLIED PHYSICS

Learning the solution operator of parametric partial differential equations with physics-informed DeepONets

Sifan Wang (1), Hanwen Wang (1), Paris Perdikaris (2)*

(1) Graduate Group in Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, PA 19104, USA. (2) Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA 19104, USA.
*Corresponding author. Email: [email protected]

Copyright © 2021 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution License 4.0 (CC BY).

Partial differential equations (PDEs) play a central role in the mathematical analysis and modeling of complex dynamic processes across all corners of science and engineering. Their solution often requires laborious analytical or computational tools, associated with a cost that is markedly amplified when different scenarios need to be investigated, for example, corresponding to different initial or boundary conditions, different inputs, etc. In this work, we introduce physics-informed DeepONets, a deep learning framework for learning the solution operator of arbitrary PDEs, even in the absence of any paired input-output training data. We illustrate the effectiveness of the proposed framework in rapidly predicting the solution of various types of parametric PDEs up to three orders of magnitude faster compared to conventional PDE solvers, setting a previously unexplored paradigm for modeling and simulation of nonlinear and nonequilibrium processes in science and engineering.

INTRODUCTION

As machine learning (ML) methodologies take center stage across diverse disciplines in science and engineering, there is an increased interest in adopting data-driven methods to analyze, emulate, and optimize complex physical systems. The dynamic behavior of such systems is often described by conservation and constitutive laws expressed as systems of partial differential equations (PDEs) (1). A classical task then involves the use of analytical or computational tools to solve such equations across a range of scenarios, e.g., different domain geometries, input parameters, and initial and boundary conditions (IBCs). Mathematically speaking, solving these so-called parametric PDE problems involves learning the solution operator that maps variable input entities to the corresponding latent solutions of the underlying PDE system. Tackling this task using traditional tools [e.g., finite element methods (2)] bears a formidable cost, as independent simulations need to be performed for every different domain geometry, input parameter, or IBCs. This challenge has motivated a growing literature on reduced-order methods (3-9) that leverage existing datasets to build fast emulators, often at the price of reduced accuracy, stability, and generalization performance (10, 11). More recently, ML tools are actively developed to infer solutions of PDEs (12-18); however, most existing tools can only accommodate a fixed given set of input parameters or IBCs. Nevertheless, these approaches have found wide applicability across diverse applications including fluid mechanics (19, 20), heat transfer (21, 22), bioengineering (23, 24), materials (25-28), and finance (29, 30), showcasing the remarkable effectiveness of ML techniques in learning black box functions, even in high-dimensional contexts (31). A natural question then arises: Can ML methods be effective in building fast emulators for solving parametric PDEs?

Solving parametric PDEs requires learning operators (i.e., maps between infinite-dimensional function spaces) instead of functions (i.e., maps between finite-dimensional vector spaces), thus defining a new and relatively underexplored realm for ML-based approaches. Neural operator methods (32-34) represent the solution map of parametric PDEs as an integral Hilbert-Schmidt operator, whose kernel is parametrized and learned from paired observations, either using local message passing on a graph-based discretization of the physical domain (32, 33) or using global Fourier approximations in the frequency domain (34). By construction, neural operator methods are resolution independent (i.e., the model can be queried at any arbitrary input location), but they require large training datasets, while their involved implementation often leads to slow and computationally expensive training loops. More recently, Lu et al. (35) presented a novel operator learning architecture coined DeepONet that is motivated by the universal approximation theorem for operators (36, 37). DeepONets still require large annotated datasets consisting of paired input-output observations, but they provide a simple and intuitive model architecture that is fast to train, while allowing for a continuous representation of the target output functions that is independent of resolution. Beyond deep learning approaches, operator-valued kernel methods (38, 39) have also been demonstrated as a powerful tool for learning nonlinear operators, and they can naturally be generalized to neural networks acting on function spaces (40), but their applicability is generally limited due to their computational cost. Here, we should again stress that the aforementioned techniques enable inference in abstract infinite-dimensional Banach spaces (41), a paradigm shift from current ML practice that mainly focuses on learning functions instead of operators. Recent theoretical findings also suggest that the sample complexity of deep neural networks (31, 42, 43), and DeepONets in particular (44), can circumvent the curse of dimensionality in certain scenarios.

While the aforementioned methodologies have demonstrated early promise across a range of applications (45-49), their application to solving parametric PDEs faces two fundamental challenges. First, they require a large corpus of paired input-output observations. In many realistic scenarios, the acquisition of such data involves the repeated evaluation of expensive experiments or costly high-fidelity simulators, so generating sufficiently large training datasets may be prohibitively expensive. Ideally, one would wish to be able to train such models without any observed data at all (i.e., given only knowledge of the PDE form and its corresponding IBCs). The second challenge relates to the fact that, by construction, the methods outlined above can only return a crude approximation to the target solution operator, in the sense that the predicted output functions are not guaranteed to satisfy the underlying PDE. Recent efforts (16, 50-53) attempt to address some of these challenges by designing appropriate architectures and loss functions for learning discretized operators (i.e., maps between high-dimensional Euclidean spaces). Although these approaches can relax the requirement for paired input-output training data, they are limited by the resolution of their underlying mesh discretization and, consequently, need modifications to their architecture for different resolutions/discretizations to achieve consistent convergence [if at all possible, as demonstrated in (32)].

In this work, we aim to address the aforementioned challenges by exploring a simple yet remarkably effective extension of the DeepONet framework (35). Drawing motivation from physics-informed neural networks (14), we recognize that the outputs of a DeepONet model are differentiable with respect to their input coordinates, therefore allowing us to use automatic differentiation (54, 55) to formulate an appropriate regularization mechanism for biasing the target output functions to satisfy the underlying PDE constraints. This yields a simple procedure for training physics-informed DeepONet models even in the absence of any training data for the latent output functions, except for the appropriate IBCs of a given PDE system. By constraining the outputs of a DeepONet to approximately satisfy an underlying governing law, we observe substantial improvements in predictive accuracy (up to one to two orders of magnitude reduction in predictive errors), enhanced generalization performance even for out-of-distribution prediction and extrapolation tasks, as well as enhanced data efficiency (up to 100% reduction in the number of examples required to train a DeepONet model). Hence, we demonstrate how physics-informed DeepONet models can be used to solve parametric PDEs without any paired input-output observations, a setting for which existing approaches for operator learning in Banach spaces fall short. Moreover, a trained physics-informed DeepONet model can generate PDE solutions up to three orders of magnitude faster compared to traditional PDE solvers. Together, the computational infrastructure developed in this work can have broad technical impact in reducing computational costs and accelerating scientific modeling of complex nonlinear, nonequilibrium processes across diverse applications including engineering design and control, Earth System science, and computational biology.

RESULTS

The proposed physics-informed DeepONet architecture is summarized in Fig. 1. Motivated by the universal approximation theorem for operators (35, 36), the architecture features two neural networks coined the "branch" and "trunk" networks, respectively; automatic differentiation of their outputs enables us to learn the solution operator of arbitrary PDEs. The associated loss functions, performance metrics, computational cost, hyperparameters, and training details are discussed in the Supplementary Materials. In the following, we demonstrate the effectiveness of physics-informed DeepONets across a series of comprehensive numerical studies for solving various types of parametric PDEs. A summary of the different benchmarks considered is presented in Table 1. It is worth emphasizing that, in all cases, the proposed deep learning models are trained without any paired input-output data, assuming only knowledge of the governing equation and its corresponding initial or boundary conditions.

Solving a parametric ordinary differential equation

We begin with a pedagogical example involving the antiderivative operator. The underlying governing law corresponds to an initial value problem described by the following ordinary differential equation (ODE)

ds(x)/dx = u(x),  x ∈ [0, 1]   (1)

s(0) = 0   (2)

Here, we aim to learn the solution operator mapping any forcing term u(x) to the ODE solution s(x) using a physics-informed DeepONet. The model is trained on random realizations of u(x) generated by sampling a Gaussian random field (GRF) as detailed in the Supplementary Materials, while prediction accuracy is measured on new unseen realizations that are not used during model training.
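The exact GRF specification used to generate training inputs is given in the Supplementary Materials; as a rough illustration of the sampling step, the sketch below draws input functions at m fixed sensor locations from a zero-mean GRF with a squared-exponential kernel, whose length scale plays the role of the length scales quoted in the text. The kernel choice and sizes here are assumptions for illustration, not the exact settings of the paper.

import numpy as np

def sample_grf(m=100, length_scale=0.2, n_samples=5, seed=0):
    """Draw random input functions u(x) on [0, 1] from a zero-mean Gaussian random field
    with a squared-exponential (RBF) covariance kernel of the given length scale."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, m)                       # fixed sensor locations x_1, ..., x_m
    diff = x[:, None] - x[None, :]
    K = np.exp(-0.5 * diff**2 / length_scale**2)       # RBF covariance matrix
    L = np.linalg.cholesky(K + 1e-10 * np.eye(m))      # small jitter for numerical stability
    return x, rng.standard_normal((n_samples, m)) @ L.T   # each row is one realization u(x_i)

x_sensors, u_samples = sample_grf(length_scale=0.2)    # smaller length scales give rougher inputs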


Fig. 1. Making DeepONets physics informed. The DeepONet architecture (35) consists of two subnetworks, the branch net for extracting latent representations of input functions and the trunk net for extracting latent representations of input coordinates at which the output functions are evaluated. A continuous and differentiable representation of the output functions is then obtained by merging the latent representations extracted by each subnetwork via a dot product. Automatic differentiation can then be used to formulate appropriate regularization mechanisms for biasing the DeepONet outputs to satisfy a given system of PDEs. BC, boundary conditions; IC, initial conditions.

Table 1. Summary of benchmarks for assessing the performance of physics-informed DeepONets across various types of parametric differential equations. The reported test error corresponds to the relative L2 prediction error of the trained model, averaged over all examples in the test dataset (see eq. S20).

Governing law      | Equation form                         | Random input          | Test error
Linear ODE         | ds(x)/dx = u(x)                       | Forcing terms         | 0.33 ± 0.32%
Diffusion-reaction | ∂s/∂t = D ∂²s/∂x² + k s² + u(x)       | Source terms          | 0.45 ± 0.16%
Burgers'           | ∂s/∂t + s ∂s/∂x − ν ∂²s/∂x² = 0       | Initial conditions    | 1.38 ± 1.64%
Advection          | ∂s/∂t + u ∂s/∂x = 0                   | Variable coefficients | 2.24 ± 0.68%
Eikonal            | ∥∇s∥₂ = 1                             | Domain geometries     | 0.42 ± 0.11%

Results for one representative input sample u(x) from the test dataset are presented in Fig. 2. It is evident that an excellent agreement can be achieved between the physics-informed DeepONet predictions and the ground truth. More impressively, below, we show that physics-informed DeepONets can also accommodate irregular input functions by using an appropriate neural network architecture, such as a Fourier features network (56) for their trunk. As shown in Fig. 2, the predicted solutions s(x) and their corresponding ODE residuals u(x) obtained by a physics-informed DeepONet with a Fourier feature trunk network are in excellent agreement with the exact solutions for this benchmark. Additional systematic studies and visualizations are provided in the Supplementary Materials (see figs. S1 to S11 and tables S6 to S10). On the basis of these observations, we may also conclude that physics-informed DeepONets can be regarded as a class of deep learning models that greatly enhance and generalize the capabilities of physics-informed neural networks (57), which are limited to solving ODEs and PDEs for a given set of input parameters that remain fixed during both the training and prediction phases (see tables S7 and S8 for a more detailed comparison).
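For readers unfamiliar with Fourier feature networks (56), the sketch below shows the random Fourier feature mapping that such a trunk applies to its input coordinates before the usual fully connected layers. The number of frequencies and the scale of the random matrix B are illustrative choices, not the values used in this work.

import jax.numpy as jnp
from jax import random

def fourier_features(y, B):
    """Map input coordinates y to [cos(2*pi*B*y), sin(2*pi*B*y)], as in Tancik et al. (56)."""
    proj = 2.0 * jnp.pi * jnp.dot(y, B.T)
    return jnp.concatenate([jnp.cos(proj), jnp.sin(proj)], axis=-1)

key = random.PRNGKey(0)
B = 10.0 * random.normal(key, (64, 1))     # 64 random frequencies for a 1D input; scale is illustrative
y = jnp.linspace(0.0, 1.0, 128)[:, None]
gamma_y = fourier_features(y, B)           # (128, 128) embedding fed to the trunk MLP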
It is also worth pointing out that the trained physics-informed DeepONet is even capable of yielding accurate predictions for out-of-distribution test data. To illustrate this, we create a test dataset by sampling input functions from a GRF with a larger length scale of l = 0.2 (recall that the training data for this case is generated using l = 0.01). The corresponding relative L2 prediction error averaged over 1000 test examples is measured as 0.7%. Additional visualizations of the model predictions for this out-of-distribution prediction task can be found in the Supplementary Materials (fig. S9).

Diffusion-reaction dynamics

Our next example involves an implicit operator described by a nonlinear diffusion-reaction PDE with a source term u(x)

∂s/∂t = D ∂²s/∂x² + k s² + u(x),  (x, t) ∈ (0, 1] × (0, 1]   (3)

assuming zero IBCs, while D = 0.01 is the diffusion coefficient and k = 0.01 is the reaction rate. Here, we aim to learn the solution operator mapping source terms u(x) to the corresponding PDE solutions s(x, t). The model is trained on random realizations of u(x) generated by sampling a GRF as detailed in the Supplementary Materials, while prediction accuracy is measured on new unseen realizations that are not used during model training.

The top panels of Fig. 3 show the comparison between the predicted and the exact solution for a random test input sample. More visualizations for different input samples can be found in the Supplementary Materials (fig. S12). We observe that the physics-informed DeepONet predictions achieve an excellent agreement with the corresponding reference solutions. Furthermore, we provide a comparison against the conventional DeepONet formulation recently put forth by Lu et al. (35). This case necessitates paired input-output observations [u(x), s(x, t)] to be provided as training data, as no physical constraints are leveraged during model training. The mean and SD of the relative L2 errors of the conventional DeepONet and the physics-informed DeepONet over the test dataset are visualized in the bottom panel of Fig. 3. The average relative L2 errors of the DeepONet and the physics-informed DeepONet are ∼1.92 and ∼0.45%, respectively. In contrast to the conventional DeepONet that is trained on paired input-output measurements, the proposed physics-informed DeepONet can yield much more accurate predictions even without any paired training data (except for the specified IBCs). In our experience, predictive accuracy can generally be improved by using a larger batch size during training. A study of the effect of batch size for training physics-informed DeepONets can be found in the Supplementary Materials (figs. S13 and S16). A series of convergence studies aiming to illustrate how predictive accuracy is affected by the number of input sensor locations m and different neural network architectures is also presented in the Supplementary Materials (fig. S14).

Burgers' transport dynamics

To highlight the ability of the proposed framework to handle nonlinearity in the governing PDEs, we consider the one-dimensional (1D) Burgers' benchmark investigated in Li et al. (34)

∂s/∂t + s ∂s/∂x − ν ∂²s/∂x² = 0,  (x, t) ∈ (0, 1) × (0, 1]   (4)

s(x, 0) = u(x),  x ∈ (0, 1)   (5)

with periodic boundary conditions

s(0, t) = s(1, t)   (6)

∂s/∂x (0, t) = ∂s/∂x (1, t)   (7)

where t ∈ (0, 1), the viscosity is set to ν = 0.01, and the initial condition u(x) is generated from a GRF ∼ 𝒩(0, 25²(−Δ + 5²I)⁻⁴), satisfying the periodic boundary conditions.

Our goal here is to use the proposed physics-informed DeepONet model to learn the solution operator mapping initial conditions u(x) to the full spatiotemporal solution s(x, t) of the 1D Burgers' equation. To this end, the model is trained on random realizations of u(x) generated by sampling a GRF as detailed in the Supplementary Materials, while prediction accuracy is measured on new unseen realizations that are not used during model training.

Fig. 2. Solving a one-dimensional parametric ODE. (A and B) Exact solution and residual versus the predictions of a trained physics-informed DeepONet for a representative input function sampled from a GRF with length scale l = 0.2. (C and D) Exact solutions and corresponding ODE residuals versus the predictions of a trained physics-informed DeepONet with Fourier feature embeddings (56) for a representative input function sampled from a GRF with length scale l = 0.01. The predicted residual u(x) is computed via automatic differentiation (55).

Fig. 3. Solving a parametric diffusion-reaction system. (Top) Exact solution versus the prediction of a trained physics-informed DeepONet for a representative example in the test dataset. (Bottom) Mean and SD of the relative L2 prediction error of a trained DeepONet (with paired input-output training data) and a physics-informed DeepONet (without paired input-output training data) over 1000 examples in the test dataset. The mean and SD of the relative L2 prediction error are ∼1.92 ± 1.12% (DeepONet) and ∼0.45 ± 0.16% (physics-informed DeepONet), respectively. The physics-informed DeepONet yields ∼80% improvement in prediction accuracy with a 100% reduction in the dataset size required for training. Tanh, hyperbolic tangent; ReLU, rectified linear unit.

The average relative L2 error of the best trained model is ∼1.38% (see figs. S17 to S19). The physics-informed DeepONet achieves accuracy comparable to Fourier operator methods (34), albeit the latter has only been tested for a simpler case corresponding to ν = 0.1 and requires training on a large corpus of paired input-output data. Furthermore, visualizations corresponding to the worst example in the test dataset are shown in the top panels of Fig. 4.

Fig. 4. Solving a parametric Burgers' equation. (Top) Exact solution versus the prediction of the best-trained physics-informed DeepONet. The resulting relative L2 error of the predicted solution is 3%. (Bottom) Computational cost (s) for performing inference with a trained physics-informed DeepONet model [conventional or modified multilayer perceptron (MLP) architecture], as well as the corresponding timing for solving a PDE with a conventional spectral solver (58). Notably, a trained physics-informed DeepONet model can predict the solution of 𝒪(10³) time-dependent PDEs in a fraction of a second, up to three orders of magnitude faster compared to a conventional PDE solver. Reported timings are obtained on a single NVIDIA V100 graphics processing unit (GPU).

One can see that the predicted solution achieves good agreement with the reference solution, with a relative L2 error of 3.30%. Here, we must also emphasize that a trained physics-informed DeepONet model can rapidly predict the entire spatiotemporal solution of the Burgers equation in ∼10 ms. Inference with physics-informed DeepONets is trivially parallelizable, allowing for the solution of 𝒪(10³) PDEs in a fraction of a second and yielding up to three orders of magnitude in speed-up compared to a conventional spectral solver (58); see the bottom panel of Fig. 4.
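The quoted timings rest on evaluating the trained model for many input functions and query points in a single vectorized, just-in-time-compiled call. The sketch below illustrates this pattern in JAX; a stand-in single-layer model with random weights is used in place of the trained network, and all shapes are purely illustrative.

import jax
import jax.numpy as jnp

m, q = 100, 50                                                # number of input sensors, latent features
key1, key2 = jax.random.split(jax.random.PRNGKey(0))
W_branch = jax.random.normal(key1, (q, m)) / jnp.sqrt(m)      # stand-in "trained" weights,
W_trunk = jax.random.normal(key2, (q, 2)) / jnp.sqrt(2.0)     # purely for illustration

def deeponet_apply(u, y):
    # Dot product of branch and trunk features (see Eq. 15 in Methods) at a query point y = (x, t).
    return jnp.dot(jnp.tanh(W_branch @ u), jnp.tanh(W_trunk @ y))

predict_fn = jax.vmap(deeponet_apply, in_axes=(None, 0))           # all query points for one input
batched_predict = jax.jit(jax.vmap(predict_fn, in_axes=(0, None)))  # all input functions at once

u_batch = jax.random.normal(key1, (1000, m))      # 1,000 discretized initial conditions
y_query = jax.random.uniform(key2, (2048, 2))     # (x, t) query points
s_batch = batched_predict(u_batch, y_query)       # (1000, 2048) predictions in one device call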
Despite the promising results presented here, we must note the need for further methodological advances toward enhancing the accuracy and robustness of physics-informed DeepONets in tackling PDE systems with stiff, turbulent, or chaotic dynamics. For example, we have observed that the predictive accuracy of physics-informed DeepONets degrades in regions where the PDE solution exhibits steep gradients, a behavior that is pronounced as the viscosity parameter in the Burgers equation is further decreased (see fig. S20 and table S11 for more details and quantitative results). We conjecture that these issues can be tackled in the future by designing more specialized architectures that are tailored to the dynamic behavior of a given PDE, as well as by more effective optimization algorithms for training.

Advection equation

This example aims to investigate the performance of physics-informed DeepONets for tackling advection-dominated PDEs, a setting in which conventional approaches to reduced-order modeling face significant challenges (7, 10, 11). To this end, we consider a linear advection equation with variable coefficients

∂s/∂t + u(x) ∂s/∂x = 0,  (x, t) ∈ (0, 1) × (0, 1)   (8)

with the IBCs

s(x, 0) = f(x)   (9)

s(0, t) = g(t)   (10)

where f(x) = sin(πx) and g(t) = sin(πt/2). To make the input function u(x) strictly positive, we let u(x) = v(x) − min_x v(x) + 1, where v(x) is sampled from a GRF with a length scale l = 0.2. The goal is to learn the solution operator G mapping variable coefficients u(x) to the associated solutions s(x, t) (see the Supplementary Materials for more details).

As shown in Fig. 5, the trained physics-informed DeepONet is able to achieve an overall good agreement with the reference PDE solution, although some inaccuracies can be observed in regions where the solution exhibits steep gradients (similarly to the Burgers' example discussed above; see additional visualizations presented in fig. S21). The resulting relative L2 prediction error averaged over all examples in the test dataset is 2.24%, leading to the conclusion that physics-informed DeepONets can be effective surrogates for advection-dominated PDEs.

Eikonal equation

Our last example aims to highlight the capability of the proposed physics-informed DeepONet to handle different types of input functions. To this end, let us consider a 2D eikonal equation of the form

∥∇s(x)∥₂ = 1,  x ∈ Ω
s(x) = 0,  x ∈ ∂Ω   (11)

where x = (x, y) ∈ ℝ² denotes 2D spatial coordinates and Ω is an open domain with a piecewise smooth boundary ∂Ω. A solution to the above equation is a signed distance function measuring the distance of a point in Ω to the closest point on the boundary ∂Ω, i.e.,

s(x) = d(x, ∂Ω) if x ∈ Ω,  and  s(x) = −d(x, ∂Ω) if x ∈ Ω^c

where d(·, ·) is a distance function defined as

d(x, ∂Ω) := inf_{y ∈ ∂Ω} d(x, y)   (12)

Signed distance functions (SDFs) have recently sparked increased interest in the computer vision and graphics communities as a tool for shape representation learning (59). This is because SDFs can continuously represent abstract shapes or surfaces implicitly as their zero-level set, yielding high-quality shape representations, interpolation, and completion from partial and noisy input data (59). In this example, we seek to learn the solution map from a well-behaved closed curve ∂Ω to its associated signed distance function, i.e., the solution of the eikonal equation defined in Eq. 11. As a benchmark, we consider different airfoil geometries from the University of Illinois Urbana-Champaign (UIUC) database (60), a subset of which is used to train the model (see the Supplementary Materials for more details).

The trained DeepONet model is then capable of predicting the solution of the eikonal equation for any given input airfoil geometry. To evaluate its performance, we visualize the zero-level set of the learned signed distance function and compare it with the exact airfoil geometry. As shown in Fig. 6, the zero-level sets achieve a good agreement with the exact airfoil geometries. One may conclude that the proposed framework is capable of achieving an accurate approximation of the exact signed distance function. Additional systematic studies and quantitative comparisons are provided in the Supplementary Materials (see figs. S23 to S25).
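For reference and evaluation purposes only (the physics-informed model itself needs no labeled SDF data), the exact signed distance to a discretized closed curve can be approximated by brute force, combining the distance of Eq. 12 with a point-in-polygon test for the sign. The sketch below uses a circle as a stand-in for an airfoil contour and measures distances to the sampled boundary points, a coarse approximation of the true point-to-segment distance.

import numpy as np
from matplotlib.path import Path

def signed_distance(points, boundary):
    """Brute-force signed distance from query points to a closed curve given as a polygon
    (positive inside Omega, negative outside, matching the convention below Eq. 11)."""
    d = np.min(np.linalg.norm(points[:, None, :] - boundary[None, :, :], axis=-1), axis=1)
    inside = Path(boundary).contains_points(points)
    return np.where(inside, d, -d)

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=-1)    # stand-in for an airfoil contour
grid = np.random.uniform(-1.5, 1.5, size=(1000, 2))
sdf = signed_distance(grid, circle)       # the zero-level set of sdf recovers the curve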
DISCUSSION

This paper presents physics-informed DeepONets, a novel deep learning framework for approximating nonlinear operators in infinite-dimensional Banach spaces. Leveraging automatic differentiation, we present a simple yet remarkably effective mechanism for biasing the outputs of DeepONets toward physically consistent predictions, allowing us to realize significant improvements in predictive accuracy, generalization performance, and data efficiency compared to existing operator learning techniques. An even more intriguing finding is that physics-informed DeepONets can learn the solution operator of parametric ODEs and PDEs, even in the absence of any paired input-output training data. This capability introduces a radically new way of simulating nonlinear and nonequilibrium phenomena across different applications in science and engineering, up to three orders of magnitude faster compared to conventional solvers.

Fig. 5. Solving a parametric advection equation. Exact solution versus the prediction of a trained physics-informed DeepONet for a representative example in the
test dataset.

Fig. 6. Solving a parametric eikonal equation (airfoils). (Top) Exact airfoil geometry versus the zero-level set obtained from the predicted signed distance function for
three different input examples in the test dataset. (Bottom) Predicted signed distance function of a trained physics-informed DeepONet for three different airfoil geometries
in the test dataset.

Given the prominent role that PDEs play in the mathematical analysis, modeling, and simulation of complex physical systems, the physics-informed DeepONet architecture can be broadly applied in science and engineering, since PDEs are prevalent across diverse problem domains including fluid mechanics, electromagnetics, quantum mechanics, and elasticity. However, despite the early promise demonstrated here, numerous technical questions remain open and require further investigation. Motivated by the successful application of Fourier feature networks (56), it is natural to ask the following: For a given parametric governing law, what is the optimal feature embedding or network architecture of a physics-informed DeepONet? Recently, Wang et al. (61) proposed a multiscale Fourier feature network to tackle PDEs with multiscale behavior. Such an architecture may potentially be used as the backbone of physics-informed DeepONets to learn multiscale operators and solve multiscale parametric PDEs. Another question arises from the possibility of achieving improved performance by assigning weights in the physics-informed DeepONet loss function. It has been shown that these weights play an important role in enhancing the trainability of constrained neural networks (62-64). Therefore, it is natural to ask the following: What are the appropriate weights to use for training physics-informed DeepONets? How can we design effective algorithms for accelerating training and ensuring accuracy and robustness in the predicted outputs? We believe that addressing these questions will not only enhance the performance of physics-informed DeepONets but also introduce a paradigm shift in how we model and simulate complex, nonlinear, and multiscale physical systems across diverse applications in science and engineering.

METHODS

DeepONets (35) present a specialized deep learning architecture that encapsulates the universal approximation theorem for operators (36). Here, we illustrate how DeepONets can be effectively applied to learning the solution operator of parametric PDEs. The terminology "parametric PDEs" refers to the fact that some parameters of a given PDE system are allowed to vary over a certain range. These input parameters may include, but are not limited to, the shape of the physical domain, the initial or boundary conditions, constant or variable coefficients (e.g., diffusion or reaction rates), source terms, etc. To describe such problems in their full generality, let (𝒰, 𝒱, 𝒮) be a triplet of Banach spaces and 𝒩 : 𝒰 × 𝒮 → 𝒱 be a linear or nonlinear differential operator. We consider general parametric PDEs taking the form

𝒩(u, s) = 0   (13)

where u ∈ 𝒰 denotes the parameters (i.e., input functions) and s ∈ 𝒮 denotes the corresponding unknown solution of the PDE system. Specifically, we assume that, for any u ∈ 𝒰, there exists a unique solution s = s(u) ∈ 𝒮 to Eq. 13 (subject to appropriate IBCs). Then, we can define the solution operator G : 𝒰 → 𝒮 as

G(u) = s(u)   (14)

Following the original formulation of Lu et al. (35), we represent the solution map G by an unstacked DeepONet G_θ, where θ denotes all trainable parameters of the DeepONet network. As illustrated in Fig. 1, the unstacked DeepONet is composed of two separate neural networks referred to as the branch and trunk networks, respectively. The branch network takes u as input and returns a feature embedding [b_1, b_2, ..., b_q]^T ∈ ℝ^q as output, where u = [u(x_1), u(x_2), ..., u(x_m)] represents a function u ∈ 𝒰 evaluated at a collection of fixed locations {x_i}_{i=1}^m. The trunk network takes the continuous coordinates y as input and outputs a feature embedding [t_1, t_2, ..., t_q]^T ∈ ℝ^q. To obtain the final output of the DeepONet, the outputs of the branch and trunk networks are merged via a dot product. More specifically, the DeepONet prediction G_θ(u)(y) of a function u evaluated at y can be expressed as

G_θ(u)(y) = Σ_{k=1}^{q} b_k(u(x_1), u(x_2), ..., u(x_m)) t_k(y)   (15)

where the b_k are produced by the branch network, the t_k by the trunk network, and θ denotes the collection of all trainable weight and bias parameters in the branch and trunk networks.
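A minimal JAX sketch of the unstacked forward pass in Eq. 15 is given below; the two-layer tanh MLPs, layer widths, and two-dimensional trunk input (x, t) are illustrative choices rather than the architectures reported in the Supplementary Materials.

import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # One (weights, bias) pair per layer, with simple 1/sqrt(fan_in) scaling.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (n_in, n_out)) / jnp.sqrt(n_in), jnp.zeros(n_out))
            for k, n_in, n_out in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def deeponet(params, u, y):
    # Eq. 15: dot product of branch features b_k(u(x_1), ..., u(x_m)) and trunk features t_k(y).
    branch_params, trunk_params = params
    b = mlp(branch_params, u)        # u: function values at the m fixed sensor locations
    t = mlp(trunk_params, y)         # y: continuous query coordinates
    return jnp.sum(b * t)

m, q = 100, 50
key_b, key_t = jax.random.split(jax.random.PRNGKey(0))
params = (init_mlp(key_b, [m, 64, q]), init_mlp(key_t, [2, 64, q]))
s_pred = deeponet(params, jnp.ones(m), jnp.array([0.5, 0.25]))   # G_theta(u)(y) at y = (x, t)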

Notice that the outputs of a DeepONet model are continuously differentiable with respect to their input coordinates. Therefore, one may use automatic differentiation (54, 55) to formulate an appropriate regularization mechanism for biasing the target output functions to satisfy any given differential constraints. Consequently, we may then construct a "physics-informed" DeepONet by formulating the following loss function

ℒ(θ) = ℒ_operator(θ) + ℒ_physics(θ)   (16)

where

ℒ_operator(θ) = (1/NP) Σ_{i=1}^{N} Σ_{j=1}^{P} |G_θ(u^(i))(y_{u,j}^(i)) − G(u^(i))(y_{u,j}^(i))|²   (17)

ℒ_physics(θ) = (1/NQm) Σ_{i=1}^{N} Σ_{j=1}^{Q} Σ_{k=1}^{m} |𝒩(u^(i)(x_k), G_θ(u^(i))(y_{r,j}^(i)))|²   (18)

Here, {u^(i)}_{i=1}^{N} denotes N separate input functions sampled from 𝒰. For each u^(i), {y_{u,j}^(i)}_{j=1}^{P} are P locations that are determined by the data observations, initial or boundary conditions, etc. Besides, {y_{r,j}^(i)}_{j=1}^{Q} is a set of collocation points that can be randomly sampled in the domain of G(u^(i)). As a consequence, ℒ_operator(θ) fits the available solution measurements, while ℒ_physics(θ) enforces the underlying PDE constraints. Contrary to the fixed sensor locations {x_i}_{i=1}^{m}, we remark that the locations of {y_{u,j}^(i)}_{j=1}^{P} and {y_{r,j}^(i)}_{j=1}^{Q} may vary for different i, thus allowing us to construct a continuous representation of the output functions s ∈ 𝒮. More details on how this general framework can be adapted to the different PDE systems presented in Results (including the choice of neural network architectures, formulation of loss functions, and training details) are provided in the Supplementary Materials.
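The sketch below instantiates Eqs. 16 to 18 for the antiderivative operator of Eq. 1, reusing the deeponet function from the previous sketch (built here with a one-dimensional trunk input). Since no paired solution data are assumed, ℒ_operator reduces to fitting the initial condition s(0) = 0, the residual operator 𝒩 is ds/dx − u, and the batching scheme shown is an assumption for illustration.

import jax
import jax.numpy as jnp

def pi_deeponet_loss(params, u_batch, x_sensors, x_colloc):
    # Physics-informed DeepONet loss (Eqs. 16 to 18) for the antiderivative ODE ds/dx = u(x).
    # u_batch:  (N, m) input functions evaluated at the m fixed sensor locations x_sensors
    # x_colloc: (N, Q) collocation points where the ODE residual is enforced
    # params:   built as in the previous sketch, but with trunk input size 1.
    s = lambda u, x: deeponet(params, u, jnp.stack([x]))    # G_theta(u)(x), scalar prediction
    ds_dx = jax.grad(s, argnums=1)                          # d/dx via automatic differentiation

    # L_operator (Eq. 17): with no paired data, only the initial condition s(0) = 0 is fitted.
    loss_op = jnp.mean(jax.vmap(lambda u: s(u, 0.0) ** 2)(u_batch))

    # L_physics (Eq. 18): squared ODE residual ds/dx - u(x) at the collocation points,
    # with u(x) recovered from its sensor values by linear interpolation.
    def residual(u, x):
        return ds_dx(u, x) - jnp.interp(x, x_sensors, u)
    res = jax.vmap(lambda u, xs: jax.vmap(lambda x: residual(u, x))(xs))(u_batch, x_colloc)
    return loss_op + jnp.mean(res ** 2)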
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at https://science.org/doi/10.1126/sciadv.abi8605

REFERENCES AND NOTES
1. R. Courant, D. Hilbert, Methods of Mathematical Physics: Partial Differential Equations (John Wiley & Sons, 2008).
2. T. J. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis (Courier Corporation, 2012).
3. D. J. Lucia, P. S. Beran, W. A. Silva, Reduced-order modeling: New approaches for computational physics. Prog. Aerosp. Sci. 40, 51–117 (2004).
4. J. N. Kutz, S. L. Brunton, B. W. Brunton, J. L. Proctor, Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems (SIAM, 2016).
5. P. Benner, M. Ohlberger, A. Patera, G. Rozza, K. Urban, Model Reduction of Parametrized Systems (Springer, 2017).
6. W. H. Schilders, H. A. Van der Vorst, J. Rommes, in Model Order Reduction: Theory, Research Aspects and Applications (Springer, 2008), vol. 13.
7. A. Quarteroni, G. Rozza, in Reduced Order Methods for Modeling and Computational Reduction (Springer, 2014), vol. 9.
8. I. Mezić, Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dyn. 41, 309–325 (2005).
9. B. Peherstorfer, K. Willcox, Data-driven operator inference for nonintrusive projection-based model reduction. Comput. Methods Appl. Mech. Eng. 306, 196–215 (2016).
10. A. J. Majda, D. Qi, Strategies for reduced-order models for predicting the statistical responses and uncertainty quantification in complex turbulent dynamical systems. SIAM Rev. 60, 491–549 (2018).
11. T. Lassila, A. Manzoni, A. Quarteroni, G. Rozza, Model order reduction in fluid dynamics: Challenges and perspectives, in Reduced Order Methods for Modeling and Computational Reduction (Springer, 2014), pp. 235–273.
12. D. C. Psichogios, L. H. Ungar, A hybrid neural network-first principles approach to process modeling. AIChE J. 38, 1499–1511 (1992).
13. I. E. Lagaris, A. Likas, D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 9, 987–1000 (1998).
14. M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019).
15. L. Sun, H. Gao, S. Pan, J.-X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. Comput. Methods Appl. Mech. Eng. 361, 112732 (2020).
16. Y. Zhu, N. Zabaras, P.-S. Koutsourelakis, P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. J. Comput. Phys. 394, 56–81 (2019).
17. S. Karumuri, R. Tripathy, I. Bilionis, J. Panchal, Simulator-free solution of high-dimensional stochastic elliptic partial differential equations using deep neural networks. J. Comput. Phys. 404, 109120 (2020).
18. J. Sirignano, K. Spiliopoulos, DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. 375, 1339–1364 (2018).
19. M. Raissi, A. Yazdani, G. E. Karniadakis, Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 367, 1026–1030 (2020).
20. A. Tartakovsky, C. O. Marrero, P. Perdikaris, G. Tartakovsky, D. Barajas-Solano, Physics-informed deep neural networks for learning parameters and constitutive relationships in subsurface flow problems. Water Resour. Res. 56, e2019WR026731 (2020).
21. O. Hennigh, S. Narasimhan, M. A. Nabian, A. Subramaniam, K. Tangsali, M. Rietmann, J. del Aguila Ferrandis, W. Byeon, Z. Fang, S. Choudhry, NVIDIA SimNet: An AI-accelerated multi-physics simulation framework. arXiv:2012.07938 (2020).
22. S. Cai, Z. Wang, S. Wang, P. Perdikaris, G. Karniadakis, Physics-informed neural networks (PINNs) for heat transfer problems. J. Heat Transfer 143, 060801 (2021).
23. G. Kissas, Y. Yang, E. Hwuang, W. R. Witschey, J. A. Detre, P. Perdikaris, Machine learning in cardiovascular flows modeling: Predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 358, 112623 (2020).
24. F. Sahli Costabal, Y. Yang, P. Perdikaris, D. E. Hurtado, E. Kuhl, Physics-informed neural networks for cardiac activation mapping. Front. Phys. 8, 42 (2020).
25. L. Lu, M. Dao, P. Kumar, U. Ramamurty, G. E. Karniadakis, S. Suresh, Extraction of mechanical properties of materials through deep learning from instrumented indentation. Proc. Natl. Acad. Sci. U.S.A. 117, 7052–7062 (2020).
26. Y. Chen, L. Lu, G. E. Karniadakis, L. Dal Negro, Physics-informed neural networks for inverse problems in nano-optics and metamaterials. Opt. Express 28, 11618–11633 (2020).
27. S. Goswami, C. Anitescu, S. Chakraborty, T. Rabczuk, Transfer learning enhanced physics informed neural network for phase-field modeling of fracture. Theor. Appl. Fract. Mech. 106, 102447 (2020).
28. D. Z. Huang, K. Xu, C. Farhat, E. Darve, Learning constitutive relations from indirect observations using deep neural networks. J. Comput. Phys. 416, 109491 (2020).
29. D. Elbrächter, P. Grohs, A. Jentzen, C. Schwab, DNN expression rate analysis of high-dimensional PDEs: Application to option pricing. arXiv:1809.07669 (2018).
30. J. Han, A. Jentzen, W. E, Solving high-dimensional partial differential equations using deep learning. Proc. Natl. Acad. Sci. U.S.A. 115, 8505–8510 (2018).
31. T. Poggio, H. Mhaskar, L. Rosasco, B. Miranda, Q. Liao, Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. Int. J. Autom. Comput. 14, 503–519 (2017).
32. Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Neural operator: Graph kernel network for partial differential equations. arXiv:2003.03485 (2020).
33. Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Multipole graph neural operator for parametric partial differential equations. arXiv:2006.09535 (2020).
34. Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Fourier neural operator for parametric partial differential equations. arXiv:2010.08895 (2020).
35. L. Lu, P. Jin, G. Pang, Z. Zhang, G. E. Karniadakis, Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell. 3, 218–229 (2021).
36. T. Chen, H. Chen, Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Trans. Neural Netw. 6, 911–917 (1995).
37. A. D. Back, T. Chen, Universal approximation of multiple nonlinear operators by neural networks. Neural Comput. 14, 2561–2566 (2002).
38. H. Kadri, E. Duflos, P. Preux, S. Canu, A. Rakotomamonjy, J. Audiffren, Operator-valued kernels for learning from functional response data. J. Mach. Learn. Res. 17, 1–54 (2016).
39. M. Griebel, C. Rieger, Reproducing kernel Hilbert spaces for parametric partial differential equations. SIAM-ASA J. Uncertain. Quantif. 5, 111–137 (2017).
40. H. Owhadi, Do ideas have shape? Plato's theory of forms as the continuous limit of artificial neural networks. arXiv:2008.03920 (2020).
41. N. H. Nelsen, A. M. Stuart, The random feature model for input-output maps between Banach spaces. arXiv:2005.10224 (2020).
42. C. Schwab, J. Zech, Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ. Anal. Appl. 17, 19–55 (2019).
43. S. Wojtowytsch, W. E, Can shallow neural networks beat the curse of dimensionality? A mean field training perspective. IEEE Trans. Artif. Intell. 1, 121–129 (2021).
44. S. Lanthaler, S. Mishra, G. E. Karniadakis, Error estimates for DeepONets: A deep learning framework in infinite dimensions. arXiv:2102.09618 (2021).
45. S. Cai, Z. Wang, L. Lu, T. A. Zaki, G. E. Karniadakis, DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks. arXiv:2009.12935 (2020).
46. C. Lin, Z. Li, L. Lu, S. Cai, M. Maxey, G. E. Karniadakis, Operator learning for predicting multiscale bubble growth dynamics. arXiv:2012.12816 (2020).
47. B. Liu, N. Kovachki, Z. Li, K. Azizzadenesheli, A. Anandkumar, A. Stuart, K. Bhattacharya, A learning-based multiscale method and its application to inelastic impact problems. arXiv:2102.07256 (2021).
48. P. C. Di Leoni, L. Lu, C. Meneveau, G. Karniadakis, T. A. Zaki, DeepONet prediction of linear instability waves in high-speed boundary layers. arXiv:2105.08697 (2021).
49. Z. Mao, L. Lu, O. Marxen, T. A. Zaki, G. E. Karniadakis, DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators. arXiv:2011.03349 (2020).
50. Y. Khoo, J. Lu, L. Ying, Solving parametric PDE problems with artificial neural networks. arXiv:1707.03351 (2017).
51. N. Geneva, N. Zabaras, Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks. J. Comput. Phys. 403, 109056 (2020).
52. Y. Chen, B. Dong, J. Xu, Meta-MgNet: Meta multigrid networks for solving parameterized partial differential equations. arXiv:2010.14088 (2020).
53. D. Kochkov, J. A. Smith, A. Alieva, Q. Wang, M. P. Brenner, S. Hoyer, Machine learning accelerated computational fluid dynamics. arXiv:2102.01010 (2021).
54. A. Griewank, On automatic differentiation, in Mathematical Programming: Recent Developments and Applications (Kluwer Academic Publishers, 1989), pp. 83–108.
55. A. G. Baydin, B. A. Pearlmutter, A. A. Radul, J. M. Siskind, Automatic differentiation in machine learning: A survey. J. Mach. Learn. Res. 18, 1–43 (2018).
56. M. Tancik, P. P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. T. Barron, R. Ng, Fourier features let networks learn high frequency functions in low dimensional domains. arXiv:2006.10739 (2020).
57. M. Raissi, H. Babaee, P. Givi, Deep learning of turbulent scalar mixing. Phys. Rev. Fluids 4, 124501 (2019).
58. T. A. Driscoll, N. Hale, L. N. Trefethen, Chebfun Guide (2014).
59. J. J. Park, P. Florence, J. Straub, R. Newcombe, S. Lovegrove, DeepSDF: Learning continuous signed distance functions for shape representation, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019), pp. 165–174.
60. M. S. Selig, UIUC airfoil data site (1996).
61. S. Wang, H. Wang, P. Perdikaris, On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. arXiv:2012.10047 (2020).
62. S. Wang, Y. Teng, P. Perdikaris, Understanding and mitigating gradient pathologies in physics-informed neural networks. arXiv:2001.04536 (2020).

63. S. Wang, X. Yu, P. Perdikaris, When and why PINNs fail to train: A neural tangent kernel perspective. arXiv:2007.14527 (2020).
64. L. McClenny, U. Braga-Neto, Self-adaptive physics-informed neural networks using a soft attention mechanism. arXiv:2009.04544 (2020).
65. J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, Q. Zhang, JAX: Composable transformations of Python+NumPy programs (2018).
66. J. D. Hunter, Matplotlib: A 2D graphics environment. IEEE Ann. Hist. Comput. 9, 90–95 (2007).
67. C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. del Río, M. Wiebe, P. Peterson, P. Gérard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, T. E. Oliphant, Array programming with NumPy. Nature 585, 357–362 (2020).
68. C. Rasmussen, C. Williams, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning (MIT Press, 2006).
69. D. P. Kingma, J. Ba, Adam: A method for stochastic optimization. arXiv:1412.6980 (2014).
70. C. Finn, P. Abbeel, S. Levine, International Conference on Machine Learning (PMLR, 2017), pp. 1126–1135.
71. A. Iserles, in A First Course in the Numerical Analysis of Differential Equations (Cambridge Univ. Press, 2009), no. 44.
72. E. Haghighat, M. Raissi, A. Moure, H. Gomez, R. Juanes, A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput. Methods Appl. Mech. Eng. 379, 113741 (2021).
73. S. Wang, P. Perdikaris, Long-time integration of parametric evolution equations with physics-informed DeepONets. arXiv:2106.05384 (2021).
74. S. M. Cox, P. C. Matthews, Exponential time differencing for stiff systems. J. Comput. Phys. 176, 430–455 (2002).

Acknowledgments: We thank the developers of the software that enabled our research, including JAX (65), Matplotlib (66), and NumPy (67). Funding: This work received support from DOE grant DE-SC0019116, AFOSR grant FA9550-20-1-0060, and DOE-ARPA grant DE-AR0001201. Author contributions: S.W. and P.P. conceptualized the research and designed the numerical studies. S.W. and H.W. implemented the methods and conducted the numerical experiments. P.P. provided funding and supervised all aspects of this work. All authors contributed to writing the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. All code and data accompanying this manuscript are publicly available at https://doi.org/10.5281/zenodo.5206676 and https://github.com/PredictiveIntelligenceLab/Physics-informed-DeepONets.

Submitted 5 April 2021
Accepted 3 August 2021
Published 29 September 2021
10.1126/sciadv.abi8605

Citation: S. Wang, H. Wang, P. Perdikaris, Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Sci. Adv. 7, eabi8605 (2021).
