# Complex Equations with PINNs

NeuralPDE supports training PINNs with complex differential equations. This example demonstrates how to do so with [`NNODE`](@ref). Let us consider a system of [Bloch equations](https://en.wikipedia.org/wiki/Bloch_equations). Note that [`QuadratureTraining`](@ref) cannot be used with complex equations due to current limitations in computing quadratures.

Since the input to this neural network is time, which is real, we need to initialize the parameters of the neural network with complex values so that it outputs, and trains with, complex values.
```@example complex
using Random, NeuralPDE
using OrdinaryDiffEq
using Lux, OptimizationOptimisers
using Plots
rng = Random.default_rng()
Random.seed!(100)

function bloch_equations(u, p, t)
    Ω, Δ, Γ = p
    γ = Γ / 2
    ρ₁₁, ρ₂₂, ρ₁₂, ρ₂₁ = u
    dρ = [im * Ω * (ρ₁₂ - ρ₂₁) + Γ * ρ₂₂;
          -im * Ω * (ρ₁₂ - ρ₂₁) - Γ * ρ₂₂;
          -(γ + im * Δ) * ρ₁₂ - im * Ω * (ρ₂₂ - ρ₁₁);
          conj(-(γ + im * Δ) * ρ₁₂ - im * Ω * (ρ₂₂ - ρ₁₁))]
    return dρ
end

u0 = zeros(ComplexF64, 4)
u0[1] = 1.0
time_span = (0.0, 2.0)
parameters = [2.0, 0.0, 1.0]

problem = ODEProblem(bloch_equations, u0, time_span, parameters)

chain = Lux.Chain(
    Lux.Dense(1, 16, tanh;
        init_weight = (rng, a...) -> Lux.kaiming_normal(rng, ComplexF64, a...)),
    Lux.Dense(16, 4;
        init_weight = (rng, a...) -> Lux.kaiming_normal(rng, ComplexF64, a...))
)
ps, st = Lux.setup(rng, chain)

opt = OptimizationOptimisers.Adam(0.01)
ground_truth = solve(problem, Tsit5(), saveat = 0.01)
alg = NNODE(chain, opt, ps; strategy = StochasticTraining(500))
sol = solve(problem, alg, verbose = false, maxiters = 5000, saveat = 0.01)
```
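
Beyond the visual comparison below, one quick way to gauge the fit (not part of the original tutorial) is the mean absolute deviation between the PINN prediction and the `Tsit5` reference. Both solves used `saveat = 0.01` over the same time span, so their save points align.

```@example complex
using Statistics  # standard library, provides `mean`

# Stack each solution's vectors into a 4×N matrix and compare element-wise.
u_pinn = reduce(hcat, sol.u)
u_ref = reduce(hcat, ground_truth.u)
mean(abs, u_pinn .- u_ref)
```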

Now, let's plot the predictions.

`u1`:

```@example complex
plot(sol.t, real.(reduce(hcat, sol.u)[1, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[1, :]))
```

```@example complex
plot(sol.t, imag.(reduce(hcat, sol.u)[1, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[1, :]))
```

`u2`:

```@example complex
plot(sol.t, real.(reduce(hcat, sol.u)[2, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[2, :]))
```

```@example complex
plot(sol.t, imag.(reduce(hcat, sol.u)[2, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[2, :]))
```

`u3`:

```@example complex
plot(sol.t, real.(reduce(hcat, sol.u)[3, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[3, :]))
```

```@example complex
plot(sol.t, imag.(reduce(hcat, sol.u)[3, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[3, :]))
```

`u4`:

```@example complex
plot(sol.t, real.(reduce(hcat, sol.u)[4, :]));
plot!(ground_truth.t, real.(reduce(hcat, ground_truth.u)[4, :]))
```

```@example complex
plot(sol.t, imag.(reduce(hcat, sol.u)[4, :]));
plot!(ground_truth.t, imag.(reduce(hcat, ground_truth.u)[4, :]))
```

We can see that the PINN is able to learn the real parts of `u1` and `u2` and the imaginary parts of `u3` and `u4`.
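
This pattern is expected from the structure of the system: by construction, the fourth Bloch equation is the complex conjugate of the third, so `ρ₂₁ = conj(ρ₁₂)` holds along any trajectory. As a quick consistency check (not part of the original tutorial), we can verify this symmetry on the reference solution:

```@example complex
u_ref = reduce(hcat, ground_truth.u)
# Should be at machine-precision level, since the ODE preserves the symmetry.
maximum(abs, u_ref[4, :] .- conj.(u_ref[3, :]))
```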