This directory contains interactive examples that can serve as a step-by-step tutorial showcasing SciML capabilities for PDEs in Neuromancer.
Physics-informed neural networks (PINNs) examples:
Part 1: Diffusion Equation
Part 2: Burgers' Equation
Part 3: Burgers' Equation w/ Parameter Estimation (Inverse Problem)
Part 4: Laplace's Equation (steady-state)
Part 5: Damped Pendulum (stacked PINN)
Part 6: Navier-Stokes equation (lid-driven cavity flow, steady-state, KAN)
Physics-informed neural networks (PINNs) are a method for computing approximate solutions to differential equations. By exploiting prior knowledge in the form of the governing PDE equations, PINNs overcome the low data availability of many physical systems. The known PDE physical laws are used during neural network training as a regularization that restricts the space of admissible solutions of the function approximation.
The neural network (NN) acts as a function approximator, mapping the spatio-temporal coordinates onto the approximate PDE solution:

$$\hat{u}(x, t) = \mathrm{NN}_{\theta}(x, t)$$

The NN approximation must satisfy the PDE equations; that is, the PDE residual

$$f(x, t) = \frac{\partial \hat{u}}{\partial t} + \mathcal{N}[\hat{u}]$$

should vanish over the whole domain, where $\mathcal{N}[\cdot]$ denotes a (possibly nonlinear) differential operator. The training dataset contains collocation points (CP) $\{(x_f^i, t_f^i)\}_{i=1}^{N_f}$ sampled from the interior of the spatio-temporal domain, together with points on the initial condition (IC) and boundary conditions (BC).
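A minimal PyTorch sketch of this setup (an illustrative stand-in, not Neuromancer's API): a small MLP maps $(x, t)$ to $\hat{u}$, and automatic differentiation evaluates the PDE residual at sampled collocation points, here for the 1D diffusion equation $u_t - \nu u_{xx} = 0$.

```python
import torch
import torch.nn as nn

# Hypothetical PINN surrogate: a small MLP mapping (x, t) -> u_hat(x, t).
class PINN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

torch.manual_seed(0)
model = PINN()

# Collocation points sampled uniformly from the interior of the domain.
N = 64
x = torch.rand(N, 1, requires_grad=True)
t = torch.rand(N, 1, requires_grad=True)

u = model(x, t)

# Partial derivatives of the NN output via automatic differentiation.
ones = torch.ones_like(u)
u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x),
                           create_graph=True)[0]

# Residual of the 1D diffusion equation u_t - nu * u_xx = 0,
# one value per collocation point (assumed nu = 0.1 for illustration).
nu = 0.1
residual = u_t - nu * u_xx
```

Because the derivatives are built with `create_graph=True`, the residual remains differentiable with respect to the network parameters and can be penalized directly in the training loss.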
The PINN loss function is composed of multiple terms.

PDE Collocation Points Loss:
The PINN is penalized by the residual of the PDE evaluated at the collocation points:

$$\ell_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f(x_f^i, t_f^i) \right|^2$$

where $f$ denotes the PDE residual, i.e., the PDE rewritten so that its left-hand side equals zero. If the NN output satisfies the PDE exactly, this term vanishes.

PDE Initial and Boundary Conditions Loss:
We select $N_u$ points $\{(x_u^i, t_u^i), u^i\}$ on the IC and BC, where the solution values $u^i$ are known, and penalize the mismatch of the NN prediction in a supervised manner:

$$\ell_u = \frac{1}{N_u} \sum_{i=1}^{N_u} \left| \hat{u}(x_u^i, t_u^i) - u^i \right|^2$$

Bound the PINN output in the PDE solution domain:
Sometimes we expect the outputs of the neural net to be bounded in the PDE solution domain $[u_{\min}, u_{\max}]$; this can be encouraged with penalty terms on constraint violations:

$$\ell_b = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| \max(0, \hat{u}^i - u_{\max}) \right|^2 + \left| \max(0, u_{\min} - \hat{u}^i) \right|^2$$

Total Loss:
Then the total loss is just the sum of the PDE residuals over the CP and the supervised learning residuals over the IC and BC, plus any bound penalties:

$$\mathcal{L} = \ell_f + \ell_u + \ell_b$$
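The loss composition above can be sketched in plain PyTorch (the MLP, sampling, and initial condition $u(x, 0) = \sin(\pi x)$ are illustrative assumptions, not Neuromancer's API):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
u_hat = lambda x, t: net(torch.cat([x, t], dim=-1))

# Interior collocation points (CP) for the PDE residual term.
xf = torch.rand(128, 1, requires_grad=True)
tf = torch.rand(128, 1, requires_grad=True)
u = u_hat(xf, tf)
ones = torch.ones_like(u)
u_t = torch.autograd.grad(u, tf, ones, create_graph=True)[0]
u_x = torch.autograd.grad(u, xf, ones, create_graph=True)[0]
u_xx = torch.autograd.grad(u_x, xf, torch.ones_like(u_x), create_graph=True)[0]
loss_pde = ((u_t - 0.1 * u_xx) ** 2).mean()      # PDE residual over CP

# IC/BC points with known target values -> supervised residual.
xu = torch.rand(32, 1)
tu = torch.zeros(32, 1)                          # e.g. initial condition t = 0
u_target = torch.sin(torch.pi * xu)              # assumed IC u(x, 0) = sin(pi x)
loss_data = ((u_hat(xu, tu) - u_target) ** 2).mean()

# Optional bound penalty keeping u_hat inside an assumed range [-1, 1].
lo, hi = -1.0, 1.0
loss_bound = (torch.relu(u - hi) ** 2 + torch.relu(lo - u) ** 2).mean()

# Total loss: sum of all terms, differentiable end to end.
total_loss = loss_pde + loss_data + loss_bound
total_loss.backward()
```

A standard training loop then simply minimizes `total_loss` over the network parameters with any gradient-based optimizer, resampling the collocation points as desired.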
Stacked physics-informed neural networks represent a novel approach to enhance the training efficiency and accuracy of models dealing with partial differential equations (PDEs). This method leverages a hierarchy of multifidelity networks, where each layer of the stack uses the output of the previous one as an input, leading to a progressive refinement of the solution [4,5]. This architecture allows for a more gradual learning process, helping overcome the difficulties typically associated with training deep networks on complex PDEs, such as those subject to fixed points and multiple local minima. Stacked PINNs are available in Neuromancer via `blocks.MultiFidelityMLP`.
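The stacking idea can be illustrated with a toy PyTorch sketch (this is a simplified stand-in for the multifidelity concept, not the `blocks.MultiFidelityMLP` implementation): each stage receives the original coordinates plus the previous stage's prediction and refines it.

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One fidelity level: a small MLP producing a refined estimate."""
    def __init__(self, in_dim, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, z):
        return self.net(z)

class StackedPINN(nn.Module):
    def __init__(self, n_stages=3):
        super().__init__()
        # The first stage sees only (x, t); later stages additionally
        # receive the current solution estimate from the stage below.
        self.stages = nn.ModuleList(
            [Stage(2)] + [Stage(3) for _ in range(n_stages - 1)]
        )

    def forward(self, xt):
        u = self.stages[0](xt)
        for stage in self.stages[1:]:
            # Progressive refinement conditioned on the previous output.
            u = stage(torch.cat([xt, u], dim=-1))
        return u

torch.manual_seed(0)
xt = torch.rand(8, 2)          # batch of (x, t) coordinates
u = StackedPINN()(xt)
```

In practice the stages can be trained jointly or sequentially, with earlier (lower-fidelity) stages capturing the coarse solution and later ones correcting it.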
Based on the Kolmogorov-Arnold representation theorem, KANs offer an alternative architecture: where traditional neural networks utilize fixed activation functions, KANs employ learnable activation functions on the edges of the network, replacing linear weight parameters with parametrized spline functions. This fundamental shift can enhance model interpretability and improve computational efficiency and accuracy [6]. KANs are available in Neuromancer via `blocks.KANBlock`, which leverages the efficient-kan implementation of [7].
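To make the "learnable function per edge" idea concrete, here is a toy sketch in PyTorch: each input-output edge carries its own learnable univariate function, parameterized here by a small Gaussian radial basis for brevity. This is a deliberate simplification of the spline parameterization used by efficient-kan, not that implementation.

```python
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    """Each edge (i -> o) applies its own learnable univariate function."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        # Fixed basis centers on [-1, 1]; one coefficient vector per edge.
        self.register_buffer("centers", torch.linspace(-1.0, 1.0, n_basis))
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x):                # x: (batch, in_dim)
        # Evaluate the Gaussian basis at every input: (batch, in_dim, n_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2) / 0.1)
        # Every edge applies its learned function; results sum per output
        # neuron, replacing the usual weight-matrix multiplication.
        return torch.einsum("bik,oik->bo", phi, self.coef)

torch.manual_seed(0)
layer = ToyKANLayer(2, 3)
x = torch.rand(4, 2) * 2 - 1             # inputs scaled into [-1, 1]
y = layer(x)
```

Because each edge function is an explicit sum of basis terms, the learned univariate shapes can be plotted and inspected, which is the source of the interpretability claims for KANs.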
[6] Liu, Ziming, et al. (2024). KAN: Kolmogorov-Arnold Networks.
