SymJit.jl

SymJit.jl is a lightweight just-in-time (JIT) compiler that directly translates Julia Symbolics expressions into machine code without using a separate library such as LLVM. Its main utility is to generate fast numerical functions to feed into various numerical solvers provided by the Julia SciML ecosystem.

SymJit.jl is a Julia wrapper for symjit, which provides JIT capability to the Python sympy ecosystem. Currently, SymJit.jl and symjit share the same underlying compiler library, written in Rust. Its companion package JitEngine.jl, on the other hand, is a Julia-only JIT package.

The Rust backend generates AVX-compatible code by default for x86-64/AMD64 processors but can fall back to SSE2 instructions if the processor does not support AVX or if explicitly requested by passing ty="amd-sse" to the compile functions (see below). SSE2 instructions were introduced in 2000, meaning that virtually all current 64-bit x86-64 processors support them. Intel introduced the AVX instruction set in 2011; therefore, most current processors support it. On ARM64 processors, the backend generates code for the aarch64 instruction set. ARM32 and IA32 are not supported, but RISC-V support is in development.

Installation

SymJit.jl can be installed with:

using Pkg
Pkg.add(url="https://github.com/siravan/SymJit.jl")

Its main dependencies are the Julia Computer Algebra System (CAS) infrastructure, i.e., SymbolicUtils and Symbolics, plus the higher-level modeling language ModelingToolkit, which provides the link to various solvers.

Tutorial

The main interface of SymJit.jl consists of three compile functions: compile_func, compile_ode, and compile_jac.

compile_func is a general-purpose compilation function that accepts various forms of input, such as symbolic expressions, an ODESystem from ModelingToolkit, and even certain pure Julia functions. On the other hand, compile_ode and compile_jac are specialized functions that produce fast compiled functions to pass to DifferentialEquations ODE solvers.

Using compile_func

The primary action of compile_func is to compile a list of symbolic expressions. In this form, its signature is compile_func(states, obs; params=[], kw...), where states is a list of symbolic variables, obs is a list of symbolic expressions, and the optional params is a list of symbolic parameters. Other optional arguments are discussed below. The output of compile_func is a callable object, i.e., an object that behaves like a function.

using SymJit
using Symbolics

@variables x y
f = compile_func([x, y], [x+y, x*y, x^3])
@assert f([3, 2]) == [5.0, 6.0, 27.0]

Here, f accepts an array of size 2 (= length(states)) as input and returns an array of size 3 (= length(obs)) of doubles.

compile_func also performs automatic vectorization. Moreover, it can optimize the vectorized code by generating SIMD code when available (e.g., on x86-64 processors supporting AVX or higher) and applying balanced multi-threading using the Rayon Rust crate. Vectorization is activated by passing a 2D array of size num_of_samples × num_of_states to f:

X = rand(1000, 2)
@assert f(X)[:, 1] == X[:, 1] .+ X[:, 2]
@assert f(X)[:, 2] == X[:, 1] .* X[:, 2]
@assert f(X)[:, 3] == X[:, 1] .^ 3

We can also pass a Julia function to compile_func. The Julia function should accept one or more scalar double values and return a double result. compile_func converts the function to an equivalent symbolic form; therefore, the function is restricted to a subset of possible Julia functions. Specifically, it can only have constant-bound loops, and conditionals must be expressed through IfElse.ifelse.

h(x, y, z) = x + y * z
f = compile_func(h)
@assert f(3, 4, 5) == 23.0

Note that in this situation f accepts separate values as input (instead of an array) and returns a single scalar double value. We can also pass an anonymous function, as in f = compile_func((x, y, z) -> x + y * z).
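As noted above, conditionals in compiled Julia functions must go through IfElse.ifelse. The following sketch illustrates the idea (the exact set of supported comparisons depends on the installed SymJit.jl version, so treat this as an assumption rather than a guarantee):

```julia
using SymJit
using IfElse

# Absolute value written with IfElse.ifelse instead of an `if` branch,
# so that compile_func can trace it into a symbolic expression.
myabs(x) = IfElse.ifelse(x < 0, -x, x)

f = compile_func(myabs)
@assert f(-3.0) == 3.0
@assert f(4.0) == 4.0
```

A plain `if x < 0` would not work here, since compile_func evaluates the function symbolically and cannot resolve a data-dependent branch.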

compile_func also accepts a list of parameters (optional argument params). See below for examples.

Complex Numbers

As of version 0.4.0, SymJit.jl supports complex numbers by passing dtype=:complex to compile functions.

using SymJit
using Symbolics

@variables x y
f = compile_func([x, y], [x+y, x*y, x^3]; dtype=:complex)
@assert f([4+5im, 5+1im]) == [9.0 + 6.0im, 15.0 + 29.0im, -236.0 + 115.0im]

Using compile_ode

The main utility of SymJit.jl is to prepare compiled functions for SciML functions, especially ODE solvers. These routines expect input of form f!(du, u, p, t) and jac!(J, u, p, t), which are generated by compile_ode and compile_jac, respectively.

compile_ode has three main forms. The basic usage has a signature of compile_ode(t, states, eqs; params, kw...) and returns a callable with signature of f(du, u, p, t):

using SymJit
using Symbolics
using DifferentialEquations
using Plots

@variables x y t
f = compile_ode(t, [x, y], [y, -x * t])
prob = ODEProblem(f, [0.0, 1.0], (0.0, 20.0), [])
sol = solve(prob)
plot(sol)

The solutions are Airy functions, as expected:

(plot of the Airy-function solution)

We can also define and pass parameters to compile_ode. Let's solve the Lorenz equations:

@variables x y z
@variables σ ρ β    # @parameters if ModelingToolkit is used

f = compile_ode(t, [x, y, z], [σ * (y - x), x * (ρ - z) - y, x * y - β * z]; params = [σ, ρ, β])
prob = ODEProblem(f, [1.0, 1.0, 1.0], (0.0, 100.0), (10.0, 28.0, 8 / 3))
sol = solve(prob)
plot(sol)

The output is the strange attractor:

(plot of the Lorenz strange attractor)

Instead of passing symbolic expressions, we can pass a Julia function:

function lorenz(du, u, p, t)
    x, y, z = u
    σ, ρ, β = p

    du[1] = σ * (y - x)
    du[2] = x * (ρ - z) - y
    du[3] = x * y - β * z
end

f = compile_ode(lorenz)
prob = ODEProblem(f, [1.0, 1.0, 1.0], (0.0, 100.0), (10.0, 28.0, 8 / 3))
sol = solve(prob)
plot(sol; idxs=(1, 3))

The third way to use compile_ode is to pass an ODESystem or System (ModelingToolkit v10) object:

using SymJit
using Symbolics
using ModelingToolkit
using DifferentialEquations
using Plots

@independent_variables t
@variables x(t) y(t) z(t)
@parameters σ ρ β

D = Differential(t)

eqs = [ D(x) ~ σ * (y - x),
    D(y) ~ x * (ρ - z) - y,
    D(z) ~ x * y - β * z]

@mtkcompile sys = System(eqs, t)

f = compile_ode(sys)
prob = ODEProblem(f, [1.0, 1.0, 1.0], (0.0, 100.0), [28.0, 8 / 3, 10.0])
sol = solve(prob)
plot(sol; idxs=(3, 1))

Using compile_jac

compile_jac is a companion of compile_ode that calculates the Jacobian to accelerate ODE solvers. It has exactly the same signature as compile_ode and returns a callable with signature jac!(J, u, p, t).

using SymJit
using Symbolics
using DifferentialEquations
using Plots

function lorenz(du, u, p, t)
    x, y, z = u
    σ, ρ, β = p

    du[1] = σ * (y - x)
    du[2] = x * (ρ - z) - y
    du[3] = x * y - β * z
end

f = compile_ode(lorenz)
jac = compile_jac(lorenz)

ff = ODEFunction(f; jac)

prob = ODEProblem(ff, [1.0, 1.0, 1.0], (0.0, 100.0), (10.0, 28.0, 8 / 3))
sol = solve(prob)
plot(sol; idxs=(1, 3))

As mentioned above, when we pass a Julia function to any of the compile functions, SymJit.jl reverse-engineers the symbolic equivalent of the function. compile_jac does symbolic differentiation using the machinery of Julia Symbolics and then compiles the result.
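To see what the symbolic-differentiation step produces, we can reproduce it ourselves with Symbolics.jacobian. This is a sketch of the underlying idea, not SymJit.jl's actual internals:

```julia
using Symbolics

@variables x y z σ ρ β

# Right-hand side of the Lorenz system, as in the examples above
eqs = [σ * (y - x), x * (ρ - z) - y, x * y - β * z]

# Symbolic 3×3 Jacobian with respect to the state variables;
# compile_jac compiles an equivalent matrix to machine code.
J = Symbolics.jacobian(eqs, [x, y, z])
# J == [ -σ      σ    0
#        ρ - z  -1   -x
#        y       x   -β ]
```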

Keywords

The compile functions accept optional keywords that control the compilation process. The keywords are:

  • use_simd (boolean, default true): generates SIMD instructions if possible (currently, AVX instructions on x86-64 processors). SIMD code should improve vectorized performance by up to 4x for certain tasks (using 256-bit registers that hold and operate on four doubles simultaneously).
  • use_threads (boolean, default true): use multi-threading to speed up parallel processing.
  • cse (boolean, default true): performs common-subexpression elimination, i.e., factoring common expressions and sub-expressions.
  • fastmath (boolean, default false): rewrites the code to combine multiplication and addition/subtraction into various fused-multiply-add instructions.
  • opt_level (0, 1, or 2, default 1): the optimization level, akin to -O0, -O1, -O2 options to gcc.
  • ty (string, default native): defines the type of code to generate. Possible values are:
    • amd: generates 64-bit AMD64/x86-64 code. If the processor supports AVX, this is equivalent to passing amd-avx; otherwise, it is equivalent to amd-sse.
    • amd-avx: generates 64-bit AMD64/x86-64 AVX code.
    • amd-sse: generates 64-bit AMD64/x86-64 SSE code. It requires at least SSE2, which should be easily fulfilled by all except the most ancient processors.
    • arm: generates 64-bit ARM64/aarch64 code. This option is mainly tested on Apple Silicon.
    • bytecode: uses a generic and simple bytecode evaluator as a fallback for unsupported instruction sets. Its utility is to test correctness (see the debug option below), not speed (note that Julia code cannot call the bytecode evaluator directly).
    • native (default): selects the correct instruction set based on the current processor.
    • debug: useful for debugging the generated code. It runs both the native and bytecode versions, compares the results within a tolerance, and panics if they differ.
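The keywords above can be combined in a single call. A minimal sketch, using only the keyword names listed above (behavior may vary with the installed version):

```julia
using SymJit
using Symbolics

@variables x y

# Disable common-subexpression elimination, enable fused-multiply-add
# rewriting, and target the portable bytecode evaluator.
f = compile_func([x, y], [x * y + x]; cse=false, fastmath=true, ty="bytecode")
@assert f([2.0, 3.0]) == [8.0]
```

The bytecode target is a convenient way to sanity-check results on platforms where the native code generator is unsupported or suspect.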
