From 3bbf4157e664d338c330abc0b1739bf6edaf0fd8 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Thu, 6 Feb 2025 18:46:44 +0000
Subject: [PATCH] build based on 9a626d0

---
 dev/.documenter-siteinfo.json                           |    2 +-
 dev/demos/benchmarking/index.html                       |    4 +-
 dev/demos/damped_SHO/{17e57e1c.svg => 25783463.svg}     |  442 +++---
 dev/demos/damped_SHO/index.html                         |    2 +-
 dev/demos/policy_search/{f03dd3cf.svg => 1a01dc16.svg}  | 1222 ++++++++---------
 dev/demos/policy_search/{bb7481c4.svg => cd21817d.svg}  |   64 +-
 dev/demos/policy_search/{17eb13fa.svg => dd836b07.svg}  |   76 +-
 dev/demos/policy_search/index.html                      |    8 +-
 dev/demos/roa_estimation/{dda1b6a6.svg => ff0f207c.svg} |  172 +--
 dev/demos/roa_estimation/index.html                     |    2 +-
 dev/index.html                                          |    2 +-
 dev/man/benchmarking/index.html                         |    2 +-
 dev/man/decrease/index.html                             |    2 +-
 dev/man/index.html                                      |    2 +-
 dev/man/internals/index.html                            |    2 +-
 dev/man/local_lyapunov/index.html                       |    2 +-
 dev/man/minimization/index.html                         |    2 +-
 dev/man/pdesystem/index.html                            |    4 +-
 dev/man/policy_search/index.html                        |    2 +-
 dev/man/roa/index.html                                  |    4 +-
 dev/man/structure/index.html                            |    2 +-
 21 files changed, 1010 insertions(+), 1010 deletions(-)
 rename dev/demos/damped_SHO/{17e57e1c.svg => 25783463.svg} (96%)
 rename dev/demos/policy_search/{f03dd3cf.svg => 1a01dc16.svg} (97%)
 rename dev/demos/policy_search/{bb7481c4.svg => cd21817d.svg} (92%)
 rename dev/demos/policy_search/{17eb13fa.svg => dd836b07.svg} (91%)
 rename dev/demos/roa_estimation/{dda1b6a6.svg => ff0f207c.svg} (93%)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 1107d3b..89d9419 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.11.3","generation_timestamp":"2025-02-06T17:37:38","documenter_version":"1.8.0"}}
+{"documenter":{"julia_version":"1.11.3","generation_timestamp":"2025-02-06T18:46:38","documenter_version":"1.8.0"}}

diff --git a/dev/demos/benchmarking/index.html b/dev/demos/benchmarking/index.html
index 7c742a2..3230be3 100644
--- a/dev/demos/benchmarking/index.html
+++ b/dev/demos/benchmarking/index.html
@@ -183,5 +183,5 @@
 │ Predicted  │         0 │        12 │
 │ negatives  │           │           │
 └────────────┴───────────┴───────────┘
-training_time
-156.503663399
+training_time
+151.6289133

The returned actual labels are just endpoint_check applied to endpoints, which are the results of simulating from each element of states.

all(endpoint_check.(endpoints) .== actual)
true

Similarly, the predicted labels are the results of the neural Lyapunov classifier. In this case, we used the default classifier, which just checks for negative values of $\dot{V}$.

classifier = (V, V̇, x) -> V̇ < zero(V̇) || endpoint_check(x)
all(classifier.(V_samples, V̇_samples, states) .== predicted)
true
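The confusion matrix reported by benchmark is just a tally of agreements and disagreements between these two label vectors. A minimal sketch of that tally, using made-up actual/predicted vectors rather than the ones from this run:

```julia
# Made-up labels for illustration; in the demo these come from the verbose output.
actual    = [true, true, false, false, true]   # ground truth from simulation
predicted = [true, false, false, true, true]   # neural Lyapunov classifier

# Tally the four confusion-matrix cells elementwise.
tp = count(actual .& predicted)      # true positives
tn = count(.!actual .& .!predicted)  # true negatives
fp = count(.!actual .& predicted)    # false positives
fn = count(actual .& .!predicted)    # false negatives

(tp, tn, fp, fn)  # (2, 1, 1, 1)
```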
diff --git a/dev/demos/damped_SHO/17e57e1c.svg b/dev/demos/damped_SHO/25783463.svg
similarity index 96%
(SVG plot data omitted: figure regenerated, only path coordinates changed)
diff --git a/dev/demos/damped_SHO/index.html b/dev/demos/damped_SHO/index.html
index 869a988..e98faf7 100644
--- a/dev/demos/damped_SHO/index.html
+++ b/dev/demos/damped_SHO/index.html
@@ -201,4 +201,4 @@
     ylabel = "v",
 );
 p2 = scatter!([0], [0], label = "Equilibrium");
-plot(p1, p2)Example block output

Each sublevel set of $V$ completely contained in the plot above has been verified as a subset of the region of attraction.

+plot(p1, p2)Example block output

diff --git a/dev/demos/policy_search/f03dd3cf.svg b/dev/demos/policy_search/1a01dc16.svg
similarity index 97%
(SVG plot data omitted: figure regenerated, only path coordinates changed)
diff --git a/dev/demos/policy_search/bb7481c4.svg b/dev/demos/policy_search/cd21817d.svg
similarity index 92%
(SVG plot data omitted: figure regenerated, only path coordinates changed)
diff --git a/dev/demos/policy_search/17eb13fa.svg b/dev/demos/policy_search/dd836b07.svg
similarity index 91%
(SVG plot data omitted: figure regenerated, only path coordinates changed)
diff --git a/dev/demos/policy_search/index.html b/dev/demos/policy_search/index.html
index c6da04c..c4cda17 100644
--- a/dev/demos/policy_search/index.html
+++ b/dev/demos/policy_search/index.html
@@ -296,7 +296,7 @@
     label = "Downward Equilibria", color
= :green, markershape = :+);
 p3 = scatter!([-pi, pi] / pi, [0, 0], label = "Upward Equilibria", color = :red,
     markershape = :x, legend = false);
-plot(p1, p2, p3)Example block output

Now, let's simulate the closed-loop dynamics to verify that the controller can get our system to the upward equilibrium.

First, we'll start at the downward equilibrium:

state_order = map(st -> SymbolicUtils.isterm(st) ? operation(st) : st, state_order)
+plot(p1, p2, p3)Example block output
 state_syms = Symbol.(state_order)
 
 closed_loop_dynamics = ODEFunction(
@@ -310,7 +310,7 @@
 downward_equilibrium = zeros(2)
 ode_prob = ODEProblem(closed_loop_dynamics, downward_equilibrium, [0.0, 120.0], p)
 sol = solve(ode_prob, Tsit5())
-plot(sol)Example block output
+plot(sol)Example block output
 # ...the system should make it to the top
 θ_end, ω_end = sol.u[end]
 x_end, y_end = sin(θ_end), -cos(θ_end)
 [x_end, y_end, ω_end] # Should be approximately [0.0, 1.0, 0.0]
3-element Vector{Float64}:
@@ -320,10 +320,10 @@
 x0 = lb .+ rand(2) .* (ub .- lb)
 ode_prob = ODEProblem(closed_loop_dynamics, x0, [0.0, 150.0], p)
 sol = solve(ode_prob, Tsit5())
-plot(sol)Example block output
+plot(sol)Example block output
 # ...the system should make it to the top
 θ_end, ω_end = sol.u[end]
 x_end, y_end = sin(θ_end), -cos(θ_end)
 [x_end, y_end, ω_end] # Should be approximately [0.0, 1.0, 0.0]
3-element Vector{Float64}:
  -0.7263061983522099
   0.6873713015795468
- -2.470171486030198e-7
+ -2.470171486030198e-7

diff --git a/dev/demos/roa_estimation/dda1b6a6.svg b/dev/demos/roa_estimation/ff0f207c.svg
similarity index 93%
(SVG plot data omitted: figure regenerated, only path coordinates changed)
diff --git a/dev/demos/roa_estimation/index.html b/dev/demos/roa_estimation/index.html
index e182a71..a667753 100644
--- a/dev/demos/roa_estimation/index.html
+++ b/dev/demos/roa_estimation/index.html
@@ -179,4 +179,4 @@
 p_V̇ = vspan!(collect(RoA); label = "Estimated Region of Attraction", color = :gray, fillstyle = :/);
 p_V̇ = vspan!([-1, 1]; label = "True Region of Attraction", opacity = 0.2, color = :green);
-plt = plot(p_V, p_V̇)Example block output
+plt = plot(p_V, p_V̇)Example block output
diff --git a/dev/index.html b/dev/index.html
index a5b556a..50256f9 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,2 +1,2 @@
-Home · NeuralLyapunov.jl

NeuralLyapunov.jl

NeuralLyapunov.jl is a library for searching for neural Lyapunov functions in Julia.

This package provides an API for setting up the search for a neural Lyapunov function. Such a search can be formulated as a partial differential inequality, and this library generates a ModelingToolkit.jl PDESystem to be solved using NeuralPDE.jl. Since the Lyapunov conditions can be formulated in several different ways and a neural Lyapunov function can be set up in many different forms, this library presents an extensible interface for users to choose how they wish to set up the search, with useful pre-built options for common setups.

Getting Started

If this is your first time using the library, start by familiarizing yourself with the components of a neural Lyapunov problem in NeuralLyapunov.jl. Then, you can dive in with any of the following demonstrations (the damped simple harmonic oscillator is recommended to begin):

When you begin to write your own neural Lyapunov code, especially if you hope to define your own neural Lyapunov formulation, you may find any of the following manual pages useful:

+Home · NeuralLyapunov.jl

diff --git a/dev/man/benchmarking/index.html b/dev/man/benchmarking/index.html
index 8d5d34a..3307f0f 100644
--- a/dev/man/benchmarking/index.html
+++ b/dev/man/benchmarking/index.html
@@ -1,3 +1,3 @@
 Benchmarking neural Lyapunov methods · NeuralLyapunov.jl

Benchmarking neural Lyapunov methods

To facilitate comparison of different neural Lyapunov specifications, optimizers, hyperparameters, etc., we provide the benchmark function.

Through its arguments, users may specify how a neural Lyapunov problem, the neural network structure, the physics-informed neural network discretization strategy, and the optimization strategy used to solve the problem. After solving the problem in the specified manner, the dynamical system is simulated (users can specify an ODE solver in the arguments, as well) and classification by the neural Lyapunov function is compared to the simulation results. By default, the benchmark function returns a confusion matrix for the resultant neural Lyapunov classifier and the training time, so that users can compare accuracy and computation speed of various methods.

NeuralLyapunov.benchmarkFunction
benchmark(dynamics, bounds, spec, chain, strategy, opt; <keyword_arguments>)
-benchmark(dynamics, lb, ub, spec, chain, strategy, opt; <keyword_arguments>)

Evaluate the specified neural Lyapunov method on the given system. Return the confusion matrix and optimization time.

Train a neural Lyapunov function as specified, then discretize the domain using a grid discretization and use the neural Lyapunov function and the provided classifier to predict whether grid points are in the region of attraction of the provided fixed_point. Finally, simulate the system from each grid point and check if the trajectories reach the fixed point. Return a confusion matrix for the neural Lyapunov classifier using the results of the simulated trajectories as ground truth. Additionally return the time it took for the optimization to run.

To use multiple solvers, users should supply a vector of optimizers in opt. The first optimizer will be used, then the problem will be remade with the result of the first optimization as the initial guess. Then, the second optimizer will be used, and so on. Supplying a vector of Pairs in optimization_args will use the same arguments for each optimization pass, and supplying a vector of such vectors will use potentially different arguments for each optimization pass.

Positional Arguments

  • dynamics: the dynamical system being analyzed, represented as an ODESystem or the function f such that ẋ = f(x[, u], p, t); either way, the ODE should not depend on time and only t = 0.0 will be used. (For an example of when f would have a u argument, see add_policy_search.)
  • bounds: an array of domains, defining the training domain by bounding the states (and derivatives, when applicable) of dynamics; only used when dynamics isa ODESystem, otherwise use lb and ub.
  • lb and ub: the training domain will be $[lb_1, ub_1]×[lb_2, ub_2]×...$; not used when dynamics isa ODESystem, then use bounds.
  • spec: a NeuralLyapunovSpecification defining the Lyapunov function structure, as well as the minimization and decrease conditions.
  • chain: a vector of Lux/Flux chains with a d-dimensional input and a 1-dimensional output corresponding to each of the dependent variables, where d is the length of bounds or lb and ub. Note that this specification respects the order of the dependent variables as specified in the PDESystem. Flux chains will be converted to Lux internally by NeuralPDE using NeuralPDE.adapt(FromFluxAdaptor(false, false), chain).
  • strategy: determines which training strategy will be used. See the NeuralPDE Training Strategy documentation for more details.
  • opt: optimizer to use in training the neural Lyapunov function.

Keyword Arguments

  • n_grid: number of grid points in each dimension used for evaluating the neural Lyapunov classifier.
  • classifier: function of $V(x)$, $V̇(x)$, and $x$ that predicts whether $x$ is in the region of attraction; defaults to (V, V̇, x) -> V̇ < 0 || endpoint_check(x).
  • fixed_point: the equilibrium being analyzed; defaults to the origin.
  • p: the values of the parameters of the dynamical system being analyzed; defaults to SciMLBase.NullParameters(); not used when dynamics isa ODESystem, then use the default parameter values of dynamics.
  • state_syms: an array of the Symbol representing each state; not used when dynamics isa ODESystem, then the symbols from dynamics are used; if dynamics isa ODEFunction, symbols stored there are used, unless overridden here; if not provided here and cannot be inferred, [:state1, :state2, ...] will be used.
  • parameter_syms: an array of the Symbol representing each parameter; not used when dynamics isa ODESystem, then the symbols from dynamics are used; if dynamics isa ODEFunction, symbols stored there are used, unless overridden here; if not provided here and cannot be inferred, [:param1, :param2, ...] will be used.
  • policy_search::Bool: whether or not to include a loss term enforcing fixed_point to actually be a fixed point; defaults to false; only used when dynamics isa Function && !(dynamics isa ODEFunction); when dynamics isa ODEFunction, policy_search should not be supplied (as it must be false); when dynamics isa ODESystem, value inferred by the presence of unbound inputs.
  • optimization_args: arguments to be passed into the optimization solver, as a vector of Pairs. For more information, see the Optimization.jl docs.
  • simulation_time: simulation end time for checking if trajectory from a point reaches equilibrium
  • ode_solver: differential equation solver used in simulating the system for evaluation. For more information, see the DifferentialEquations.jl docs.
  • ode_solver_args: arguments to be passed into the differential equation solver. For more information, see the DifferentialEquations.jl docs.
  • endpoint_check: function of the endpoint of a simulation that returns true when the endpoint is approximately the fixed point and false otherwise; defaults to (x) -> ≈(x, fixed_point; atol=atol).
  • atol: absolute tolerance used in the default value for endpoint_check.
  • verbose: enable verbose output. Instead of outputting (confusion_matrix, training_time), output ((confusion_matrix, training_time), (states, endpoints, actual, predicted, V_samples, V̇_samples)), where states is the grid of evaluation points, endpoints is the endpoints of the simulations, actual is the result of endpoint_check applied to endpoints, predicted is the result of classifier applied to states, V_samples is $V$ evaluated at states, and V̇_samples is $V̇$ evaluated at states.
  • init_params: initial parameters for the neural network; defaults to nothing, in which case the initial parameters are generated using Lux.initialparameters and rng.
  • rng: random number generator used to generate initial parameters; defaults to a StableRNG with seed 0.
source
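For instance, the default endpoint_check above is nothing more than approximate equality to the fixed point. A small sketch with placeholder values:

```julia
fixed_point = [0.0, 0.0]  # placeholder equilibrium
atol = 1e-3               # placeholder tolerance

# Default endpoint_check from the docstring: x ≈ fixed_point within atol
endpoint_check = x -> isapprox(x, fixed_point; atol = atol)

endpoint_check([1e-4, -2e-4])  # true: this endpoint counts as converged
endpoint_check([0.5, 0.0])     # false: this trajectory ended elsewhere
```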
+benchmark(dynamics, lb, ub, spec, chain, strategy, opt; <keyword_arguments>)

source
diff --git a/dev/man/decrease/index.html b/dev/man/decrease/index.html
index 8999dbb..cc0ef6f 100644
--- a/dev/man/decrease/index.html
+++ b/dev/man/decrease/index.html
@@ -1,4 +1,4 @@
 Lyapunov Decrease Condition · NeuralLyapunov.jl

Lyapunov Decrease Condition

To represent the condition that the Lyapunov function $V(x)$ must decrease along system trajectories, we typically introduce a new function $\dot{V}(x) = \nabla V(x) \cdot f(x)$. This function represents the rate of change of $V$ along system trajectories. That is to say, if $x(t)$ is a trajectory defined by $\frac{dx}{dt} = f(x)$, then $\dot{V}(x(t)) = \frac{d}{dt} [ V(x(t)) ]$. It is then sufficient to show that $\dot{V}(x)$ is negative away from the fixed point and zero at the fixed point, since a negative derivative means a decreasing function.

Put mathematically, it is sufficient to require $\dot{V}(x) < 0 \, \forall x \ne x_0$ and $\dot{V}(x_0) = 0$. We call such functions negative definite (around the fixed point $x_0$). The weaker condition that $\dot{V}(x) \le 0 \, \forall x \ne x_0$ and $\dot{V}(x_0) = 0$ is negative semi-definiteness.
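As a small illustration (the dynamics and candidate function here are chosen for this sketch, not taken from the library), consider a damped oscillator with $V(x) = \lVert x \rVert^2$. Then $\dot{V} = \nabla V \cdot f$ works out to $-4x_2^2$, which is only negative semi-definite, since it vanishes on the whole line $x_2 = 0$:

```julia
using LinearAlgebra: ⋅

f(x) = [x[2], -x[1] - 2x[2]]  # damped oscillator (illustration only)
V(x) = x ⋅ x                  # candidate Lyapunov function V(x) = ‖x‖²
∇V(x) = 2x                    # gradient of V, computed by hand

# V̇(x) = ∇V(x) ⋅ f(x): the rate of change of V along trajectories
V̇(x) = ∇V(x) ⋅ f(x)          # simplifies to -4x₂²

V̇([1.0, 0.5])  # -1.0: V is strictly decreasing at this state
V̇([1.0, 0.0])  # 0.0 away from the origin, so V̇ is only semi-definite
```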

The condition that $\dot{V}(x_0) = 0$ is satisfied by the definition of $\dot{V}$ and the fact that $x_0$ is a fixed point, so we do not need to train for it. In cases where the dynamics have some dependence on the neural network (e.g., in policy search), we should rather train directly for $f(x_0) = 0$, since the minimization condition will also guarantee $[\nabla V](x_0) = 0$, so $\dot{V}(x_0) = 0$.

NeuralLyapunov.jl provides the LyapunovDecreaseCondition struct for users to specify the form of the decrease condition to enforce through training.

NeuralLyapunov.LyapunovDecreaseConditionType
LyapunovDecreaseCondition(check_decrease, rate_metric, strength, rectifier)

Specifies the form of the Lyapunov decrease condition to be used.

Fields

  • check_decrease::Bool: whether or not to train for negativity/nonpositivity of $V̇(x)$.
  • rate_metric::Function: should increase with $V̇(x)$; used when check_decrease == true.
  • strength::Function: specifies the level of strictness for negativity training; should be zero when the two inputs are equal and nonnegative otherwise; used when check_decrease == true.
  • rectifier::Function: positive when the input is positive and (approximately) zero when the input is negative.

Training conditions

If check_decrease == true, training will enforce:

$\texttt{rate\_metric}(V(x), V̇(x)) ≤ -\texttt{strength}(x, x_0).$

The inequality will be approximated by the equation:

$\texttt{rectifier}(\texttt{rate\_metric}(V(x), V̇(x)) + \texttt{strength}(x, x_0)) = 0.$

Note that the approximate equation and inequality are identical when $\texttt{rectifier}(t) = \max(0, t)$.

If the dynamics truly have a fixed point at $x_0$ and $V̇(x)$ is truly the rate of decrease of $V(x)$ along the dynamics, then $V̇(x_0)$ will be $0$ and there is no need to train for $V̇(x_0) = 0$.

Examples:

Asymptotic decrease can be enforced by requiring $V̇(x) ≤ -C \lVert x - x_0 \rVert^2$, for some positive $C$, which corresponds to

rate_metric = (V, dVdt) -> dVdt
 strength = (x, x0) -> C * (x - x0) ⋅ (x - x0)

This can also be accomplished with AsymptoticStability.

Exponential decrease of rate $k$ is proven by $V̇(x) ≤ - k V(x)$, which corresponds to

rate_metric = (V, dVdt) -> dVdt + k * V
-strength = (x, x0) -> 0.0

This can also be accomplished with ExponentialStability.

In either case, the rectified linear unit rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.

source
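Putting these pieces together, the training residual for the asymptotic-decrease example above can be evaluated by hand (the sample point and function values below are made up):

```julia
using LinearAlgebra: ⋅

C = 1e-6  # margin constant from the example above
rate_metric = (V, dVdt) -> dVdt
strength = (x, x0) -> C * ((x - x0) ⋅ (x - x0))
rectifier = t -> max(zero(t), t)

# The residual rectifier(rate_metric(V, V̇) + strength(x, x0)) is zero exactly
# when V̇(x) ≤ -C‖x - x₀‖², i.e. when the decrease condition holds.
x, x0 = [0.1, -0.2], [0.0, 0.0]
V, V̇ = 0.3, -0.5  # made-up sample values of the candidate function and its rate
residual = rectifier(rate_metric(V, V̇) + strength(x, x0))  # 0.0: satisfied here
```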

Pre-defined decrease conditions

NeuralLyapunov.AsymptoticStabilityFunction
AsymptoticStability(; C, strength, rectifier)

Construct a LyapunovDecreaseCondition corresponding to asymptotic stability.

The decrease condition for asymptotic stability is $V̇(x) < 0$, which is here represented as $V̇(x) ≤ - \texttt{strength}(x, x_0)$, where strength is positive definite around $x_0$. By default, $\texttt{strength}(x, x_0) = C \lVert x - x_0 \rVert^2$ for the supplied $C > 0$. $C$ defaults to 1e-6.

The inequality is represented by $\texttt{rectifier}(V̇(x) + \texttt{strength}(x, x_0)) = 0$. The default value rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.

source
NeuralLyapunov.ExponentialStabilityFunction
ExponentialStability(k; rectifier)

Construct a LyapunovDecreaseCondition corresponding to exponential stability of rate $k$.

The Lyapunov condition for exponential stability is $V̇(x) ≤ -k V(x)$ for some $k > 0$.

The inequality is represented by $\texttt{rectifier}(V̇(x) + k V(x)) = 0$. The default value rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.

source
NeuralLyapunov.StabilityISLFunction
StabilityISL(; rectifier)

Construct a LyapunovDecreaseCondition corresponding to stability in the sense of Lyapunov (i.s.L.).

Stability i.s.L. is proven by $V̇(x) ≤ 0$. The inequality is represented by $\texttt{rectifier}(V̇(x)) = 0$. The default value rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.

source
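Assuming NeuralLyapunov.jl is loaded, the pre-defined conditions above can be constructed as in this sketch (the rate 2.0 is an arbitrary illustrative value; keyword defaults are as documented above):

```julia
using NeuralLyapunov

asymptotic  = AsymptoticStability()      # V̇(x) ≤ -C‖x - x₀‖², C = 1e-6 by default
exponential = ExponentialStability(2.0)  # V̇(x) ≤ -2V(x)
marginal    = StabilityISL()             # V̇(x) ≤ 0
```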

Defining your own decrease condition

If a user wishes to define their own version of the decrease condition in a form other than $\texttt{rate\_metric}(V(x), \dot{V}(x)) \le - \texttt{strength}(x, x_0)$, they must define their own subtype of AbstractLyapunovDecreaseCondition.

NeuralLyapunov.AbstractLyapunovDecreaseConditionType
AbstractLyapunovDecreaseCondition

Represents the decrease condition in a neural Lyapunov problem

All concrete AbstractLyapunovDecreaseCondition subtypes should define the check_decrease and get_decrease_condition functions.

source

When constructing the PDESystem, NeuralLyapunovPDESystem uses check_decrease to determine if it should include an equation equating the result of get_decrease_condition to zero.

NeuralLyapunov.check_decreaseFunction
check_decrease(cond::AbstractLyapunovDecreaseCondition)

Return true if cond specifies training to meet the Lyapunov decrease condition, and false if cond specifies no training to meet this condition.

source
NeuralLyapunov.get_decrease_conditionFunction
get_decrease_condition(cond::AbstractLyapunovDecreaseCondition)

Return a function of $V$, $V̇$, $x$, and $x_0$ that returns zero when the Lyapunov decrease condition is met and a value greater than zero when it is violated.

Note that the first two inputs, $V$ and $V̇$, are functions, so the decrease condition can depend on the value of these functions at multiple points.

source

Components of a Neural Lyapunov Problem

For a candidate Lyapunov function $V(x)$ to certify the stability of an equilibrium point $x_0$ of the dynamical system $\frac{dx}{dt} = f(x(t))$, it must satisfy two conditions:

  1. The function $V$ must be uniquely minimized at $x_0$, and
  2. The function $V$ must decrease along system trajectories (i.e., $V(x(t))$ decreases as long as $x(t)$ is a trajectory of the dynamical system).

A neural Lyapunov function represents the candidate Lyapunov function $V$ using a neural network, sometimes modifying the output of the network slightly so as to enforce one of the above conditions.

Thus, we specify our neural Lyapunov problems with three components, each answering a different question:

  1. How is $V$ structured in terms of the neural network?
  2. How is the minimization condition to be enforced?
  3. How is the decrease condition to be enforced?

These three components are represented by the three fields of a NeuralLyapunovSpecification object.

NeuralLyapunov.NeuralLyapunovSpecificationType
NeuralLyapunovSpecification(structure, minimization_condition, decrease_condition)

Specifies a neural Lyapunov problem.

Fields

source
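For instance, assuming NeuralLyapunov.jl is loaded and `structure` holds a Lyapunov-function structure from the structure section of this manual, a specification might be assembled as in this sketch (the component choices are illustrative):

```julia
# Sketch: assembling a specification from pre-defined components.
spec = NeuralLyapunovSpecification(
    structure,                   # how V is built from the neural network
    StrictlyPositiveDefinite(),  # minimization condition
    AsymptoticStability(),       # decrease condition
)
```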

Internals

NeuralLyapunov.phi_to_netFunction
phi_to_net(phi, θ[; idx])

Return the network as a function of state alone.

Arguments

  • phi: the neural network, represented as phi(x, θ) if the neural network has a single output, or a Vector of the same with one entry per neural network output.
  • θ: the parameters of the neural network; θ[:φ1] should be the parameters of the first neural network output (even if there is only one), θ[:φ2] the parameters of the second (if there are multiple), and so on.
  • idx: the neural network outputs to include in the returned function; defaults to all and only applicable when phi isa Vector.
source

Local Lyapunov analysis

For comparison with neural Lyapunov methods, we also provide a function for local Lyapunov analysis by linearization.

NeuralLyapunov.local_lyapunovFunction
local_lyapunov(dynamics, state_dim, optimizer_factory[, jac]; fixed_point, p)

Use semidefinite programming to derive a quadratic Lyapunov function for the linearization of dynamics around fixed_point. Return (V, dV/dt).

To solve the semidefinite program, JuMP.Model requires an optimizer_factory capable of semidefinite programming (SDP). See the JuMP documentation for examples.

If jac is not supplied, the Jacobian of the dynamics(x, p, t) with respect to x is calculated using ForwardDiff. Otherwise, jac is expected to be either a function or an AbstractMatrix. If jac isa Function, it should take in the state and parameters and output the Jacobian of dynamics with respect to the state x. If jac isa AbstractMatrix, it should be the value of the Jacobian at fixed_point.

If fixed_point is not specified, it defaults to the origin, i.e., zeros(state_dim). Parameters p for the dynamics should be supplied when the dynamics depend on them.

source
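For a stable linear system, the underlying computation can also be sketched in plain Julia with a Lyapunov-equation solve from the `LinearAlgebra` standard library, rather than the semidefinite program used by local_lyapunov; all names below are illustrative:

```julia
using LinearAlgebra

# Linearized dynamics ẋ = Ax with eigenvalues in the left half-plane,
# so A'P + PA = -Q has a positive definite solution P.
A = [0.0 1.0; -1.0 -1.0]
Q = Matrix(1.0I, 2, 2)
P = lyap(A', Q)            # lyap(A', Q) solves A'P + PA + Q = 0

V(x) = dot(x, P, x)        # quadratic Lyapunov candidate
V̇(x) = -dot(x, Q, x)       # d/dt V(x(t)) along ẋ = Ax

isposdef(P)                # true: V is positive definite
V̇([1.0, 0.5]) < 0          # true: V decreases away from the origin
```

Here P works out to [1.5 0.5; 0.5 1.0], confirming both Lyapunov conditions for the linearization.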

Lyapunov Minimization Condition

The condition that the Lyapunov function $V(x)$ must be minimized uniquely at the fixed point $x_0$ is often represented as a requirement for $V(x)$ to be positive away from the fixed point and zero at the fixed point. Put mathematically, it is sufficient to require $V(x) > 0 \, \forall x \ne x_0$ and $V(x_0) = 0$. We call such functions positive definite (around the fixed point $x_0$). The weaker condition that $V(x) \ge 0 \, \forall x \ne x_0$ and $V(x_0) = 0$ is positive semi-definiteness.

NeuralLyapunov.jl provides the LyapunovMinimizationCondition struct for users to specify the form of the minimization condition to enforce through training.

NeuralLyapunov.LyapunovMinimizationConditionType
LyapunovMinimizationCondition(check_nonnegativity, strength, rectifier, check_fixed_point)

Specifies the form of the Lyapunov minimization condition to be used.

Fields

  • check_nonnegativity::Bool: whether or not to train for positivity/nonnegativity of $V(x)$
  • strength::Function: specifies the level of strictness for positivity training; should be zero when the two inputs are equal and nonnegative otherwise; used when check_nonnegativity == true
  • rectifier::Function: positive when the input is positive and (approximately) zero when the input is negative
  • check_fixed_point: whether or not to train for $V(x_0) = 0$.

Training conditions

If check_nonnegativity is true, training will attempt to enforce:

$V(x) ≥ \texttt{strength}(x, x_0).$

The inequality will be approximated by the equation:

$\texttt{rectifier}(\texttt{strength}(x, x_0) - V(x)) = 0.$

Note that the approximate equation and inequality are identical when $\texttt{rectifier}(t) = \max(0, t)$.

If check_fixed_point is true, then training will also attempt to enforce $V(x_0) = 0$.

Examples

When training for a strictly positive definite $V$, an example of an appropriate strength is $\texttt{strength}(x, x_0) = \lVert x - x_0 \rVert^2$. This form is used in StrictlyPositiveDefinite.

If $V$ were structured such that it is always nonnegative, then $V(x_0) = 0$ is all that must be enforced in training for the Lyapunov function to be uniquely minimized at $x_0$. In that case, we would use check_nonnegativity = false; check_fixed_point = true. This can also be accomplished with DontCheckNonnegativity(true).

In either case, the rectified linear unit rectifier(t) = max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.

source
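The interplay of strength and rectifier can be seen in a plain-Julia sketch (all names below are defined here for the example, not NeuralLyapunov API); a smooth softplus is shown as one possible differentiable stand-in for the rectified linear unit:

```julia
# Pointwise minimization-condition loss: rectifier(strength(x, x₀) - V(x)).
V(x) = sum(abs2, x)                         # already positive definite about 0
strength(x, x0) = 1e-6 * sum(abs2, x - x0)  # as in StrictlyPositiveDefinite
relu(t) = max(zero(t), t)                   # exactly represents the inequality
softplus(t) = log1p(exp(t))                 # smooth approximation of relu

min_loss(x, x0, rect) = rect(strength(x, x0) - V(x))

min_loss([1.0, 2.0], zeros(2), relu)        # 0.0: condition satisfied exactly
min_loss([1.0, 2.0], zeros(2), softplus)    # small positive residual (≈ 0.007)
```

The softplus variant never reaches zero, but its residual decays exponentially as the inequality margin grows, which is why such approximations are usable in training.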

Pre-defined minimization conditions

NeuralLyapunov.StrictlyPositiveDefiniteFunction
StrictlyPositiveDefinite(; C, check_fixed_point, rectifier)

Construct a LyapunovMinimizationCondition representing $V(x) ≥ C \lVert x - x_0 \rVert^2$. If check_fixed_point == true (as is the default), then training will also attempt to enforce $V(x_0) = 0$.

The inequality is approximated by $\texttt{rectifier}(C \lVert x - x_0 \rVert^2 - V(x)) = 0$, and the default rectifier is the rectified linear unit (t) -> max(zero(t), t), which exactly represents $V(x) ≥ C \lVert x - x_0 \rVert^2$. $C$ defaults to 1e-6.

source
NeuralLyapunov.PositiveSemiDefiniteFunction
PositiveSemiDefinite(; check_fixed_point, rectifier)

Construct a LyapunovMinimizationCondition representing $V(x) ≥ 0$. If check_fixed_point == true (as is the default), then training will also attempt to enforce $V(x_0) = 0$.

The inequality is approximated by $\texttt{rectifier}( -V(x) ) = 0$ and the default rectifier is the rectified linear unit (t) -> max(zero(t), t), which exactly represents $V(x) ≥ 0$.

source
NeuralLyapunov.DontCheckNonnegativityFunction
DontCheckNonnegativity(; check_fixed_point)

Construct a LyapunovMinimizationCondition which represents not checking for nonnegativity of the Lyapunov function. This is appropriate in cases where this condition has been structurally enforced.

Even in this case, it is still possible to check for $V(x_0) = 0$, for example if V is structured to be positive for $x ≠ x_0$ but does not guarantee $V(x_0) = 0$ (such as NonnegativeNeuralLyapunov). check_fixed_point defaults to true, since in cases where $V(x_0) = 0$ is enforced structurally, the equation will reduce to 0.0 ~ 0.0, which gets automatically removed in most cases.

source

Defining your own minimization condition

If a user wishes to define their own version of the minimization condition in a form other than $V(x) ≥ \texttt{strength}(x, x_0)$, they must define their own subtype of AbstractLyapunovMinimizationCondition.

NeuralLyapunov.AbstractLyapunovMinimizationConditionType
AbstractLyapunovMinimizationCondition

Represents the minimization condition in a neural Lyapunov problem

All concrete AbstractLyapunovMinimizationCondition subtypes should define the check_nonnegativity, check_minimal_fixed_point, and get_minimization_condition functions.

source

When constructing the PDESystem, NeuralLyapunovPDESystem uses check_nonnegativity to determine if it should include an equation equating the result of get_minimization_condition to zero. It additionally uses check_minimal_fixed_point to determine if it should include the equation $V(x_0) = 0$.

NeuralLyapunov.check_nonnegativityFunction
check_nonnegativity(cond::AbstractLyapunovMinimizationCondition)

Return true if cond specifies training to meet the Lyapunov minimization condition, and false if cond specifies no training to meet this condition.

source
NeuralLyapunov.check_minimal_fixed_pointFunction
check_minimal_fixed_point(cond::AbstractLyapunovMinimizationCondition)

Return true if cond specifies training for the Lyapunov function to equal zero at the fixed point, and false if cond specifies no training to meet this condition.

source
NeuralLyapunov.get_minimization_conditionFunction
get_minimization_condition(cond::AbstractLyapunovMinimizationCondition)

Return a function of $V$, $x$, and $x_0$ that equals zero when the Lyapunov minimization condition is met for the Lyapunov candidate function $V$ at the point $x$, and is greater than zero if it's violated.

Note that the first input, $V$, is a function, so the minimization condition can depend on the value of the candidate Lyapunov function at multiple points.

source

Solving a Neural Lyapunov Problem

NeuralLyapunov.jl represents neural Lyapunov problems as systems of partial differential equations, using the ModelingToolkit.PDESystem type. Such a PDESystem can then be solved using a physics-informed neural network through NeuralPDE.jl.

Candidate Lyapunov functions will be trained within a box domain subset of the state space.

NeuralLyapunov.NeuralLyapunovPDESystemFunction
NeuralLyapunovPDESystem(dynamics::ODESystem, bounds, spec; <keyword_arguments>)
NeuralLyapunovPDESystem(dynamics::Function, lb, ub, spec; <keyword_arguments>)

Construct a ModelingToolkit.PDESystem representing the specified neural Lyapunov problem.

Positional Arguments

  • dynamics: the dynamical system being analyzed, represented as an ODESystem or the function f such that ẋ = f(x[, u], p, t); either way, the ODE should not depend on time and only t = 0.0 will be used. (For an example of when f would have a u argument, see add_policy_search.)
  • bounds: an array of domains, defining the training domain by bounding the states (and derivatives, when applicable) of dynamics; only used when dynamics isa ODESystem, otherwise use lb and ub.
  • lb and ub: the training domain will be $[lb_1, ub_1]×[lb_2, ub_2]×...$; not used when dynamics isa ODESystem, then use bounds.
  • spec: a NeuralLyapunovSpecification defining the Lyapunov function structure, as well as the minimization and decrease conditions.

Keyword Arguments

  • fixed_point: the equilibrium being analyzed; defaults to the origin.
  • p: the values of the parameters of the dynamical system being analyzed; defaults to SciMLBase.NullParameters(); not used when dynamics isa ODESystem, then use the default parameter values of dynamics.
  • state_syms: an array of the Symbol representing each state; not used when dynamics isa ODESystem, then the symbols from dynamics are used; if dynamics isa ODEFunction, symbols stored there are used, unless overridden here; if not provided here and cannot be inferred, [:state1, :state2, ...] will be used.
  • parameter_syms: an array of the Symbol representing each parameter; not used when dynamics isa ODESystem, then the symbols from dynamics are used; if dynamics isa ODEFunction, symbols stored there are used, unless overridden here; if not provided here and cannot be inferred, [:param1, :param2, ...] will be used.
  • policy_search::Bool: whether or not to include a loss term enforcing fixed_point to actually be a fixed point; defaults to false; only used when dynamics isa Function && !(dynamics isa ODEFunction); when dynamics isa ODEFunction, policy_search should not be supplied (as it must be false); when dynamics isa ODESystem, value inferred by the presence of unbound inputs.
  • name: the name of the constructed PDESystem
source
Warning

When using NeuralLyapunovPDESystem, the Lyapunov function structure, minimization and decrease conditions, and dynamics will all be symbolically traced to generate the resulting PDESystem equations. In some cases tracing results in more efficient code, but in others it can result in inefficiencies or even errors.

If the generated PDESystem is then used with NeuralPDE.jl, that library's parser will convert the equations into Julia functions representing the loss, which presents another opportunity for unexpected errors.

Extracting the numerical Lyapunov function

We provide the following convenience function for generating the Lyapunov function after the parameters have been found. If the PDESystem was solved using NeuralPDE.jl and Optimization.jl, then the argument phi is a field of the output of NeuralPDE.discretize and the argument θ is res.u.depvar where res is the result of the optimization.

NeuralLyapunov.get_numerical_lyapunov_functionFunction
get_numerical_lyapunov_function(phi, θ, structure, dynamics, fixed_point;
                                <keyword_arguments>)

Combine Lyapunov function structure, dynamics, and neural network weights to generate Julia functions representing the Lyapunov function and its time derivative: $V(x), V̇(x)$.

These functions can operate on a state vector or columnwise on a matrix of state vectors.

Positional Arguments

  • phi: the neural network, represented as phi(x, θ) if the neural network has a single output, or a Vector of the same with one entry per neural network output.
  • θ: the parameters of the neural network; θ[:φ1] should be the parameters of the first neural network output (even if there is only one), θ[:φ2] the parameters of the second (if there are multiple), and so on.
  • structure: a NeuralLyapunovStructure representing the structure of the neural Lyapunov function.
  • dynamics: the system dynamics, as a function to be used in conjunction with structure.f_call.
  • fixed_point: the equilibrium point being analyzed by the Lyapunov function.

Keyword Arguments

  • p: parameters to be passed into dynamics; defaults to SciMLBase.NullParameters().
  • use_V̇_structure: when true, $V̇(x)$ is calculated using structure.V̇; when false, $V̇(x)$ is calculated using deriv as $\frac{∂}{∂t} V(x + t f(x))$ at $t = 0$; defaults to false, as it is more efficient in many cases.
  • deriv: a function for calculating derivatives; defaults to (and expects same arguments as) ForwardDiff.derivative; only used when use_V̇_structure is false.
  • jac: a function for calculating Jacobians; defaults to (and expects same arguments as) ForwardDiff.jacobian; only used when use_V̇_structure is true.
  • J_net: the Jacobian of the neural network, specified as a function J_net(phi, θ, state); if isnothing(J_net) (as is the default), J_net will be calculated using jac; only used when use_V̇_structure is true.
source
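Putting the pieces together, a typical extraction step looks like the following sketch, where `discretization`, `res`, `structure`, and `f` are assumed to come from the NeuralPDE.jl workflow described above:

```julia
# Sketch: extract callable V and V̇ after optimization (workflow objects assumed).
net = discretization.phi   # field of the output of NeuralPDE.discretize
θ = res.u.depvar           # optimized neural network parameters
V, V̇ = get_numerical_lyapunov_function(net, θ, structure, f, zeros(2))

V(zeros(2))                # ≈ 0 at the fixed point, if training succeeded
```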
+NeuralLyapunovPDESystem(dynamics::Function, lb, ub, spec; <keyword_arguments>)

Construct a ModelingToolkit.PDESystem representing the specified neural Lyapunov problem.

Positional Arguments

  • dynamics: the dynamical system being analyzed, represented as an ODESystem or the function f such that ẋ = f(x[, u], p, t); either way, the ODE should not depend on time and only t = 0.0 will be used. (For an example of when f would have a u argument, see add_policy_search.)
  • bounds: an array of domains, defining the training domain by bounding the states (and derivatives, when applicable) of dynamics; only used when dynamics isa ODESystem, otherwise use lb and ub.
  • lb and ub: the training domain will be $[lb_1, ub_1]×[lb_2, ub_2]×...$; not used when dynamics isa ODESystem, then use bounds.
  • spec: a NeuralLyapunovSpecification defining the Lyapunov function structure, as well as the minimization and decrease conditions.

Keyword Arguments

  • fixed_point: the equilibrium being analyzed; defaults to the origin.
  • p: the values of the parameters of the dynamical system being analyzed; defaults to SciMLBase.NullParameters(); not used when dynamics isa ODESystem, then use the default parameter values of dynamics.
  • state_syms: an array of the Symbol representing each state; not used when dynamics isa ODESystem, then the symbols from dynamics are used; if dynamics isa ODEFunction, symbols stored there are used, unless overridden here; if not provided here and cannot be inferred, [:state1, :state2, ...] will be used.
  • parameter_syms: an array of the Symbol representing each parameter; not used when dynamics isa ODESystem, then the symbols from dynamics are used; if dynamics isa ODEFunction, symbols stored there are used, unless overridden here; if not provided here and cannot be inferred, [:param1, :param2, ...] will be used.
  • policy_search::Bool: whether or not to include a loss term enforcing fixed_point to actually be a fixed point; defaults to false; only used when dynamics isa Function && !(dynamics isa ODEFunction); when dynamics isa ODEFunction, policy_search should not be supplied (as it must be false); when dynamics isa ODESystem, value inferred by the presence of unbound inputs.
  • name: the name of the constructed PDESystem.
source
Warning

When using NeuralLyapunovPDESystem, the Lyapunov function structure, minimization and decrease conditions, and dynamics will all be symbolically traced to generate the resulting PDESystem equations. In some cases tracing results in more efficient code, but in others it can result in inefficiencies or even errors.

If the generated PDESystem is then used with NeuralPDE.jl, that library's parser will convert the equations into Julia functions representing the loss, which presents another opportunity for unexpected errors.

Extracting the numerical Lyapunov function

We provide the following convenience function for generating the Lyapunov function after the parameters have been found. If the PDESystem was solved using NeuralPDE.jl and Optimization.jl, then the argument phi is a field of the output of NeuralPDE.discretize and the argument θ is res.u.depvar where res is the result of the optimization.

NeuralLyapunov.get_numerical_lyapunov_functionFunction
get_numerical_lyapunov_function(phi, θ, structure, dynamics, fixed_point;
                                <keyword_arguments>)

Combine Lyapunov function structure, dynamics, and neural network weights to generate Julia functions representing the Lyapunov function and its time derivative: $V(x), V̇(x)$.

These functions can operate on a state vector or columnwise on a matrix of state vectors.

Positional Arguments

  • phi: the neural network, represented as phi(x, θ) if the neural network has a single output, or a Vector of the same with one entry per neural network output.
  • θ: the parameters of the neural network; θ[:φ1] should be the parameters of the first neural network output (even if there is only one), θ[:φ2] the parameters of the second (if there are multiple), and so on.
  • structure: a NeuralLyapunovStructure representing the structure of the neural Lyapunov function.
  • dynamics: the system dynamics, as a function to be used in conjunction with structure.f_call.
  • fixed_point: the equilibrium point being analyzed by the Lyapunov function.

Keyword Arguments

  • p: parameters to be passed into dynamics; defaults to SciMLBase.NullParameters().
  • use_V̇_structure: when true, $V̇(x)$ is calculated using structure.V̇; when false, $V̇(x)$ is calculated using deriv as $\frac{∂}{∂t} V(x + t f(x))$ at $t = 0$; defaults to false, as it is more efficient in many cases.
  • deriv: a function for calculating derivatives; defaults to (and expects same arguments as) ForwardDiff.derivative; only used when use_V̇_structure is false.
  • jac: a function for calculating Jacobians; defaults to (and expects same arguments as) ForwardDiff.jacobian; only used when use_V̇_structure is true.
  • J_net: the Jacobian of the neural network, specified as a function J_net(phi, θ, state); if isnothing(J_net) (as is the default), J_net will be calculated using jac; only used when use_V̇_structure is true.
source
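The default V̇ computation described above, differentiating $V(x + t f(x))$ at $t = 0$, can be sketched in plain Julia. The quadratic V, linear f, and the central finite difference standing in for ForwardDiff.derivative below are all illustrative assumptions, not the package implementation.

```julia
# Sketch of the default V̇ computation: V̇(x) = d/dt V(x + t f(x)) at t = 0.
# Toy quadratic V and linear dynamics f are placeholders, not the library's API.
V(x) = sum(abs2, x)           # V(x) = ‖x‖²
f(x) = -x                     # ẋ = -x, stable at the origin

# A central finite difference stands in for ForwardDiff.derivative here.
function Vdot(x; h = 1e-6)
    (V(x .+ h .* f(x)) - V(x .- h .* f(x))) / (2h)
end

# Analytically, V̇(x) = ∇V(x) ⋅ f(x) = -2‖x‖², so Vdot([1.0, -2.0]) ≈ -10.
Vdot([1.0, -2.0])
```

For this choice of V and f the finite-difference result matches the analytic $V̇(x) = -2 \lVert x \rVert^2$ to within the step-size error.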

Policy Search and Network-Dependent Dynamics

At times, we wish to model a component of the dynamics with a neural network. A common example is the policy search case, when the closed-loop dynamics include a neural network controller. In such cases, we consider the dynamics to take the form of $\frac{dx}{dt} = f(x, u, p, t)$, where $u$ is the control input/the contribution to the dynamics from the neural network. We provide the add_policy_search function to transform a NeuralLyapunovStructure to include training the neural network to represent not just the Lyapunov function, but also the relevant part of the dynamics.

As with get_numerical_lyapunov_function, we provide the get_policy convenience function to construct $u(x)$, which can be combined with the open-loop dynamics $f(x, u, p, t)$ to create the closed-loop dynamics $f_{cl}(x, p, t) = f(x, u(x), p, t)$.
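The composition $f_{cl}(x, p, t) = f(x, u(x), p, t)$ can be sketched as follows; the toy open-loop dynamics and the stand-in policy are assumptions for illustration, where in practice $u$ would come from get_policy.

```julia
# Sketch: closing the loop with a policy, f_cl(x, p, t) = f(x, u(x), p, t).
# The open-loop dynamics and the stand-in policy below are illustrative only;
# in practice u would be the output of get_policy(phi, θ, network_dim, control_dim).
f_open(x, u, p, t) = [x[2], -p[1] * x[1] + u]   # toy single-input system
u(x) = -2.0 * x[2]                               # stand-in for a trained policy

f_cl(x, p, t) = f_open(x, u(x), p, t)            # closed-loop dynamics

f_cl([1.0, 0.5], [4.0], 0.0)                     # → [0.5, -5.0]
```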

NeuralLyapunov.add_policy_searchFunction
add_policy_search(lyapunov_structure, new_dims; control_structure)

Add dependence on the neural network to the dynamics in a NeuralLyapunovStructure.

Arguments

  • lyapunov_structure::NeuralLyapunovStructure: provides structure for $V, V̇$; should assume dynamics take a form of f(x, p, t).
  • new_dims::Integer: number of outputs of the neural network to pass into the dynamics through control_structure.

Keyword Arguments

  • control_structure::Function: transforms the final new_dims outputs of the neural net before passing them into the dynamics; defaults to identity, passing in the neural network outputs unchanged.

The returned NeuralLyapunovStructure expects dynamics of the form f(x, u, p, t), where u captures the dependence of dynamics on the neural network (e.g., through a control input). When evaluating the dynamics, it uses u = control_structure(phi_end(x)) where phi_end is a function that returns the final new_dims outputs of the neural network. The other lyapunov_structure.network_dim outputs are used for calculating $V$ and $V̇$, as specified originally by lyapunov_structure.

source
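The output-splitting convention described above can be sketched with a toy network: the first network_dim outputs feed $V$ and $V̇$, while the final new_dims outputs pass through control_structure into the dynamics. The three-output phi here is an assumption, not the package's network type.

```julia
# Sketch of how network outputs are split after add_policy_search:
# the first `network_dim` outputs feed V, the final `new_dims` feed the dynamics.
# `phi` below is a toy stand-in for the real neural network.
network_dim, new_dims = 2, 1
phi(x) = [sum(abs2, x), sum(x), -x[1]]        # 3 outputs total (toy network)

phi_V(x)   = phi(x)[1:network_dim]            # used for V and V̇
phi_end(x) = phi(x)[end-new_dims+1:end]       # passed through control_structure

control_structure = identity                  # the documented default
u(x) = control_structure(phi_end(x))

u([1.0, 2.0])                                 # → [-1.0]
```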
NeuralLyapunov.get_policyFunction
get_policy(phi, θ, network_dim, control_dim; control_structure)

Generate a Julia function representing the control policy/unmodeled portion of the dynamics as a function of the state.

The returned function can operate on a state vector or columnwise on a matrix of state vectors.

Arguments

  • phi: the neural network, represented as phi(state, θ) if the neural network has a single output, or a Vector of the same with one entry per neural network output.
  • θ: the parameters of the neural network; θ[:φ1] should be the parameters of the first neural network output (even if there is only one), θ[:φ2] the parameters of the second (if there are multiple), and so on.
  • network_dim: total number of neural network outputs.
  • control_dim: number of neural network outputs used in the control policy.

Keyword Arguments

  • control_structure: transforms the final control_dim outputs of the neural net before passing them into the dynamics; defaults to identity, passing in the neural network outputs unchanged.
source

Training for Region of Attraction Identification

Satisfying the minimization and decrease conditions within the training region (or any region around the fixed point, however small) is sufficient for proving local stability. In many cases, however, we desire an estimate of the region of attraction, rather than simply a guarantee of local stability.

Any compact sublevel set wherein the minimization and decrease conditions are satisfied is an inner estimate of the region of attraction. Therefore, we can restrict training for those conditions to only within a predetermined sublevel set $\{ x : V(x) \le \rho \}$. To do so, define a LyapunovDecreaseCondition as usual and then pass it through the make_RoA_aware function, which returns an analogous RoAAwareDecreaseCondition.

NeuralLyapunov.make_RoA_awareFunction
make_RoA_aware(cond; ρ, out_of_RoA_penalty, sigmoid)

Add awareness of the region of attraction (RoA) estimation task to the supplied LyapunovDecreaseCondition.

When estimating the region of attraction using a Lyapunov function, the decrease condition only needs to be met within a bounded sublevel set $\{ x : V(x) ≤ ρ \}$. The returned RoAAwareDecreaseCondition enforces the decrease condition represented by cond only in that sublevel set.

Arguments

  • cond::LyapunovDecreaseCondition: specifies the loss to be applied when $V(x) ≤ ρ$.

Keyword Arguments

  • ρ: the target level such that the RoA will be $\{ x : V(x) ≤ ρ \}$, defaults to 1.
  • out_of_RoA_penalty::Function: specifies the loss to be applied when $V(x) > ρ$, defaults to no loss.
  • sigmoid::Function: approximately one when the input is positive and approximately zero when the input is negative, defaults to the unit step function.

The loss applied to samples $x$ such that $V(x) > ρ$ is $\lvert \texttt{out\_of\_RoA\_penalty}(V(x), V̇(x), x, x_0, ρ) \rvert^2$.

The sigmoid function allows for a smooth transition between the $V(x) ≤ ρ$ case and the $V(x) > ρ$ case, by combining the above equations into one:

$\texttt{sigmoid}(ρ - V(x)) (\text{in-RoA expression}) + \texttt{sigmoid}(V(x) - ρ) (\text{out-of-RoA expression}) = 0$.

Note that a hard transition, which only enforces the in-RoA equation when $V(x) ≤ ρ$ and the out-of-RoA equation when $V(x) > ρ$, can be provided by a sigmoid which is exactly one when the input is nonnegative and exactly zero when the input is negative. As such, the default value is sigmoid(t) = t ≥ zero(t).

See also: RoAAwareDecreaseCondition

source
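The sigmoid-weighted combination described above can be sketched in a few lines. The loss terms here are placeholder assumptions (in the package they come from cond and out_of_RoA_penalty); only the weighting scheme and the default hard step are from the documentation.

```julia
# Sketch of the sigmoid-weighted combination of in- and out-of-RoA losses.
hard_sigmoid(t) = float(t ≥ zero(t))       # documented default: exact unit step
ρ = 1.0

in_RoA_loss(V, V̇)  = max(zero(V̇), V̇)^2    # placeholder: penalize V̇ > 0
out_RoA_loss(V, V̇) = 0.0                   # documented default: no loss

combined(V, V̇; σ = hard_sigmoid) =
    σ(ρ - V) * in_RoA_loss(V, V̇) + σ(V - ρ) * out_RoA_loss(V, V̇)

combined(0.5, 0.2)   # inside the RoA: only the in-RoA loss applies
combined(2.0, 0.2)   # outside the RoA: no penalty with these defaults
```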
NeuralLyapunov.RoAAwareDecreaseConditionType
RoAAwareDecreaseCondition(check_decrease, rate_metric, strength, rectifier, ρ, out_of_RoA_penalty)

Specifies the form of the Lyapunov decrease condition to be used, training for a region of attraction estimate of $\{ x : V(x) ≤ ρ \}$.

Fields

  • check_decrease::Bool: whether or not to train for negativity/nonpositivity of $V̇(x)$.
  • rate_metric::Function: should increase with $V̇(x)$; used when check_decrease == true.
  • strength::Function: specifies the level of strictness for negativity training; should be zero when the two inputs are equal and nonnegative otherwise; used when check_decrease == true.
  • rectifier::Function: positive when the input is positive and (approximately) zero when the input is negative.
  • sigmoid::Function: approximately one when the input is positive and approximately zero when the input is negative.
  • ρ: the level of the sublevel set forming the estimate of the region of attraction.
  • out_of_RoA_penalty::Function: a loss function to be applied penalizing points outside the sublevel set forming the region of attraction estimate.

Training conditions

If check_decrease == true, training will attempt to enforce

$\texttt{rate\_metric}(V(x), V̇(x)) ≤ - \texttt{strength}(x, x_0)$

whenever $V(x) ≤ ρ$, and will instead apply a loss of

$\lvert \texttt{out\_of\_RoA\_penalty}(V(x), V̇(x), x, x_0, ρ) \rvert^2$

when $V(x) > ρ$.

The inequality will be approximated by the equation

$\texttt{rectifier}(\texttt{rate\_metric}(V(x), V̇(x)) + \texttt{strength}(x, x_0)) = 0$.

Note that the approximate equation and inequality are identical when $\texttt{rectifier}(t) = \max(0, t)$.

The sigmoid function allows for a smooth transition between the $V(x) ≤ ρ$ case and the $V(x) > ρ$ case, by combining the above equations into one:

$\texttt{sigmoid}(ρ - V(x)) (\text{in-RoA expression}) + \texttt{sigmoid}(V(x) - ρ) (\text{out-of-RoA expression}) = 0$.

Note that a hard transition, which only enforces the in-RoA equation when $V(x) ≤ ρ$ and the out-of-RoA equation when $V(x) > ρ$, can be provided by a sigmoid which is exactly one when the input is nonnegative and exactly zero when the input is negative.

If the dynamics truly have a fixed point at $x_0$ and $V̇(x)$ is truly the rate of decrease of $V(x)$ along the dynamics, then $V̇(x_0)$ will be $0$ and there is no need to train for $V̇(x_0) = 0$.

Examples:

Asymptotic decrease can be enforced by requiring $V̇(x) ≤ -C \lVert x - x_0 \rVert^2$, for some positive $C$, which corresponds to

rate_metric = (V, dVdt) -> dVdt
 strength = (x, x0) -> C * (x - x0) ⋅ (x - x0)

Exponential decrease of rate $k$ is proven by $V̇(x) ≤ - k V(x)$, which corresponds to

rate_metric = (V, dVdt) -> dVdt + k * V
strength = (x, x0) -> 0.0

Enforcing either condition only in the region of attraction and not penalizing any points outside that region would correspond to

out_of_RoA_penalty = (V, dVdt, state, fixed_point, ρ) -> 0.0

whereas an example of a penalty that decays farther in state space from the fixed point is

out_of_RoA_penalty = (V, dVdt, x, x0, ρ) -> 1.0 / ((x - x0) ⋅ (x - x0))

Note that this penalty could also depend on values of $V$ and $V̇$ at various points, as well as $ρ$.

In any of these cases, the rectified linear unit rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.
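One common differentiable approximation is a scaled softplus, which approaches the rectified linear unit as its sharpness grows; the softplus choice and sharpness parameter below are illustrative assumptions, not a package default.

```julia
# The exact rectifier and a smooth stand-in. Softplus is one common
# differentiable approximation; it is an illustration, not the package default.
relu(t) = max(zero(t), t)
softplus(t; k = 10.0) = log1p(exp(k * t)) / k   # → relu(t) as k grows

relu(-3.0), relu(2.0)      # (0.0, 2.0)
softplus(2.0)               # ≈ 2.0 for k = 10; ≈ 0 for negative inputs
```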

See also: LyapunovDecreaseCondition

source

Structuring a Neural Lyapunov function

Three simple neural Lyapunov function structures are provided, but users can always specify a custom structure using the NeuralLyapunovStructure struct.

Pre-defined structures

The simplest structure is to train the neural network directly to be the Lyapunov function, which can be accomplished using an UnstructuredNeuralLyapunov.

NeuralLyapunov.UnstructuredNeuralLyapunovFunction
UnstructuredNeuralLyapunov()

Create a NeuralLyapunovStructure where the Lyapunov function is the neural network evaluated at the state. This does not structurally enforce any Lyapunov conditions.

Corresponds to $V(x) = ϕ(x)$, where $ϕ$ is the neural network.

Dynamics are assumed to be in f(state, p, t) form, as in an ODEFunction. For f(state, input, p, t), consider using add_policy_search.

source

The condition that the Lyapunov function $V(x)$ must be minimized uniquely at the fixed point $x_0$ is often represented as a requirement for $V(x)$ to be positive away from the fixed point and zero at the fixed point. Put mathematically, it is sufficient to require $V(x) > 0 \, \forall x \ne x_0$ and $V(x_0) = 0$. We call such functions positive definite (around the fixed point $x_0$).

Two structures are provided which partially or fully enforce the minimization condition: NonnegativeNeuralLyapunov, which structurally enforces $V(x) \ge 0$ everywhere, and PositiveSemiDefiniteStructure, which additionally enforces $V(x_0) = 0$.

NeuralLyapunov.NonnegativeNeuralLyapunovFunction
NonnegativeNeuralLyapunov(network_dim; <keyword_arguments>)

Create a NeuralLyapunovStructure where the Lyapunov function is the L2 norm of the neural network output plus a constant δ times a function pos_def.

Corresponds to $V(x) = \lVert ϕ(x) \rVert^2 + δ \, \texttt{pos\_def}(x, x_0)$, where $ϕ$ is the neural network and $x_0$ is the equilibrium point.

This structure ensures $V(x) ≥ 0 \, ∀ x$ when $δ ≥ 0$ and pos_def is always nonnegative. Further, if $δ > 0$ and pos_def is strictly positive definite around fixed_point, the structure ensures that $V(x)$ is strictly positive away from fixed_point. In such cases, the minimization condition reduces to ensuring $V(x_0) = 0$, and so DontCheckNonnegativity(true) should be used.

Arguments

  • network_dim: output dimensionality of the neural network.

Keyword Arguments

  • δ: weight of pos_def, as above; defaults to 0.
  • pos_def(state, fixed_point): a function that is positive (semi-)definite in state around fixed_point; defaults to $\log(1 + \lVert x - x_0 \rVert^2)$.
  • grad_pos_def(state, fixed_point): the gradient of pos_def with respect to state at state. If isnothing(grad_pos_def) (as is the default), the gradient of pos_def will be evaluated using grad.
  • grad: a function for evaluating gradients to be used when isnothing(grad_pos_def); defaults to, and expects the same arguments as, ForwardDiff.gradient.

Dynamics are assumed to be in f(state, p, t) form, as in an ODEFunction. For f(state, input, p, t), consider using add_policy_search.

See also: DontCheckNonnegativity

source
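The form $V(x) = \lVert ϕ(x) \rVert^2 + δ \, \texttt{pos\_def}(x, x_0)$ with the documented default pos_def can be sketched directly; the two-output phi is a toy assumption, and note that it happens to vanish at $x_0$, so $V(x_0) = 0$ here by construction of the toy network rather than of the structure.

```julia
# Sketch of the NonnegativeNeuralLyapunov form V(x) = ‖ϕ(x)‖² + δ·pos_def(x, x₀),
# using the documented default pos_def. `phi` is a toy stand-in network.
phi(x) = [x[1] - x[2], 0.5 * x[1]]
pos_def(x, x0) = log(1 + sum(abs2, x .- x0))
δ, x0 = 0.1, [0.0, 0.0]

V(x) = sum(abs2, phi(x)) + δ * pos_def(x, x0)

V(x0) ≥ 0 && V([1.0, -1.0]) > 0    # nonnegative everywhere by construction
```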
NeuralLyapunov.PositiveSemiDefiniteStructureFunction
PositiveSemiDefiniteStructure(network_dim; <keyword_arguments>)

Create a NeuralLyapunovStructure where the Lyapunov function is the product of a positive (semi-)definite function pos_def which does not depend on the network and a nonnegative function non_neg which does depend on the network.

Corresponds to $V(x) = \texttt{pos\_def}(x, x_0) * \texttt{non\_neg}(ϕ, x, x_0)$, where $ϕ$ is the neural network and $x_0$ is the equilibrium point.

This structure ensures $V(x) ≥ 0$. Further, if pos_def is strictly positive definite around fixed_point and non_neg is strictly positive (as is the case for the default values of pos_def and non_neg), then this structure ensures $V(x)$ is strictly positive definite around fixed_point. In such cases, the minimization condition is satisfied structurally, so DontCheckNonnegativity(false) should be used.

Arguments

  • network_dim: output dimensionality of the neural network.

Keyword Arguments

  • pos_def(state, fixed_point): a function that is positive (semi-)definite in state around fixed_point; defaults to $\log(1 + \lVert x - x_0 \rVert^2)$.
  • non_neg(net, state, fixed_point): a nonnegative function of the neural network; note that net is the neural network $ϕ$, and net(state) is the value of the neural network at a point $ϕ(x)$; defaults to $1 + \lVert ϕ(x) \rVert^2$.
  • grad_pos_def(state, fixed_point): the gradient of pos_def with respect to state at state. If isnothing(grad_pos_def) (as is the default), the gradient of pos_def will be evaluated using grad.
  • grad_non_neg(net, J_net, state, fixed_point): the gradient of non_neg with respect to state at state; J_net is a function outputting the Jacobian of net at the input. If isnothing(grad_non_neg) (as is the default), the gradient of non_neg will be evaluated using grad.
  • grad: a function for evaluating gradients to be used when isnothing(grad_pos_def) || isnothing(grad_non_neg); defaults to, and expects the same arguments as, ForwardDiff.gradient.

Dynamics are assumed to be in f(state, p, t) form, as in an ODEFunction. For f(state, input, p, t), consider using add_policy_search.

See also: DontCheckNonnegativity

source
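The product form $V(x) = \texttt{pos\_def}(x, x_0) \cdot \texttt{non\_neg}(ϕ, x, x_0)$ with the documented defaults can be sketched as follows; phi is a toy stand-in for the neural network.

```julia
# Sketch of the PositiveSemiDefiniteStructure form
# V(x) = pos_def(x, x₀) * non_neg(ϕ, x, x₀), using the documented defaults.
phi(x) = [sin(x[1]), x[1] * x[2]]                 # toy stand-in network
x0 = [0.0, 0.0]

pos_def(x, x0) = log(1 + sum(abs2, x .- x0))      # zero only at x₀
non_neg(net, x, x0) = 1 + sum(abs2, net(x))       # strictly positive

V(x) = pos_def(x, x0) * non_neg(phi, x, x0)

V(x0) == 0 && V([0.5, -0.3]) > 0   # V(x₀) = 0 structurally; V > 0 elsewhere
```

Because pos_def vanishes only at $x_0$ and non_neg is at least one, $V(x_0) = 0$ holds for any network, which is what makes DontCheckNonnegativity(false) appropriate here.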

Defining your own neural Lyapunov function structure

To define a new structure for a neural Lyapunov function, one must specify the form of the Lyapunov candidate $V$ and its time derivative along a trajectory $\dot{V}$, as well as how to call the dynamics $f$. Additionally, the dimensionality of the output of the neural network must be known in advance.

NeuralLyapunov.NeuralLyapunovStructureType
NeuralLyapunovStructure(V, V̇, f_call, network_dim)

Specifies the structure of the neural Lyapunov function and its derivative.

Allows the user to define the Lyapunov in terms of the neural network, potentially structurally enforcing some Lyapunov conditions.

Fields

  • V(phi::Function, state, fixed_point): outputs the value of the Lyapunov function at state.
  • V̇(phi::Function, J_phi::Function, dynamics::Function, state, params, t, fixed_point): outputs the time derivative of the Lyapunov function at state.
  • f_call(dynamics::Function, phi::Function, state, params, t): outputs the derivative of the state; this is useful for making closed-loop dynamics which depend on the neural network, such as in the policy search case.
  • network_dim: the dimension of the output of the neural network.

phi and J_phi above are both functions of state alone.

source

Calling the dynamics

Very generally, the dynamical system can be a system of ODEs $\dot{x} = f(x, u, p, t)$, where $u$ is a control input, $p$ contains parameters, and $f$ depends on the neural network in some way. To capture this variety, users must supply the function f_call(dynamics::Function, phi::Function, state, p, t).

The most common example is when dynamics takes the same form as an ODEFunction, i.e., $\dot{x} = \texttt{dynamics}(x, p, t)$. In that case, f_call(dynamics, phi, state, p, t) = dynamics(state, p, t).

Suppose instead that the dynamics were supplied as a function of state alone: $\dot{x} = \texttt{dynamics}(x)$. Then, f_call(dynamics, phi, state, p, t) = dynamics(state).

Finally, suppose $\dot{x} = \texttt{dynamics}(x, u, p, t)$ has a unidimensional control input that is being trained (via policy search) to be the second output of the neural network. Then f_call(dynamics, phi, state, p, t) = dynamics(state, phi(state)[2], p, t).

Note that, despite the inclusion of the time variable $t$, NeuralLyapunov.jl currently only supports time-invariant systems, so only t = 0.0 is used.
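The three f_call variants just described can be written out concretely; all of the dynamics and the toy network below are illustrative assumptions.

```julia
# The three f_call variants described above, with toy dynamics in each form.
phi(x) = [sum(abs2, x), -x[2]]             # toy network with 2 outputs

dynamics_odefun(x, p, t) = -p[1] .* x       # ODEFunction-style f(x, p, t)
f_call_a(dynamics, phi, x, p, t) = dynamics(x, p, t)

dynamics_state(x) = -x                      # state-only f(x)
f_call_b(dynamics, phi, x, p, t) = dynamics(x)

dynamics_ctrl(x, u, p, t) = [x[2], u]       # controlled f(x, u, p, t);
f_call_c(dynamics, phi, x, p, t) =          # control = 2nd network output
    dynamics(x, phi(x)[2], p, t)

f_call_c(dynamics_ctrl, phi, [1.0, 2.0], nothing, 0.0)   # → [2.0, -2.0]
```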

Structuring $V$ and $\dot{V}$

The Lyapunov candidate function $V$ gets specified as a function V(phi, state, fixed_point), where phi is the neural network as a function phi(state). Note that this form allows $V(x)$ to depend on the neural network evaluated at points other than just the input $x$.

The time derivative $\dot{V}$ is similarly defined by a function V̇(phi, J_phi, dynamics, state, params, t, fixed_point). The function J_phi(state) gives the Jacobian of the neural network phi at state. The function dynamics is as above (with parameters params).
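A custom V and V̇ pair in the signatures just described might look like the sketch below: $V(x) = \lVert ϕ(x) \rVert^2$ and, by the chain rule, $\dot{V}(x) = 2\,ϕ(x)^\top J_ϕ(x) f(x)$. The toy phi, its hand-coded Jacobian, and the dynamics are assumptions for illustration.

```julia
# Sketch of a custom V and V̇ pair: V(x) = ‖ϕ(x)‖², so by the chain rule
# V̇(x) = 2 ϕ(x)ᵀ J_ϕ(x) f(x). `phi` and `J_phi` are toy stand-ins.
V(phi, x, x0) = sum(abs2, phi(x))

function V̇(phi, J_phi, dynamics, x, p, t, x0)
    2 * (phi(x)' * J_phi(x) * dynamics(x, p, t))
end

phi(x)   = [x[1] + x[2], x[1] * x[2]]
J_phi(x) = [1.0 1.0; x[2] x[1]]            # hand-coded Jacobian of phi
dyn(x, p, t) = -x

V̇(phi, J_phi, dyn, [1.0, 2.0], nothing, 0.0, [0.0, 0.0])   # → -34.0
```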

+Structuring a Neural Lyapunov function · NeuralLyapunov.jl

Structuring a Neural Lyapunov function

Three simple neural Lyapunov function structures are provided, but users can always specify a custom structure using the NeuralLyapunovStructure struct.

Pre-defined structures

The simplest structure is to train the neural network directly to be the Lyapunov function, which can be accomplished using an UnstructuredNeuralLyapunov.

NeuralLyapunov.UnstructuredNeuralLyapunovFunction
UnstructuredNeuralLyapunov()

Create a NeuralLyapunovStructure where the Lyapunov function is the neural network evaluated at the state. This does not structurally enforce any Lyapunov conditions.

Corresponds to $V(x) = ϕ(x)$, where $ϕ$ is the neural network.

Dynamics are assumed to be in f(state, p, t) form, as in an ODEFunction. For f(state, input, p, t), consider using add_policy_search.

source

The condition that the Lyapunov function $V(x)$ must be minimized uniquely at the fixed point $x_0$ is often represented as a requirement for $V(x)$ to be positive away from the fixed point and zero at the fixed point. Put mathematically, it is sufficient to require $V(x) > 0 \, \forall x \ne x_0$ and $V(x_0) = 0$. We call such functions positive definite (around the fixed point $x_0$).

Two structures are provided which partially or fully enforce the minimization condition: NonnegativeNeuralLyapunov, which structurally enforces $V(x) \ge 0$ everywhere, and PositiveSemiDefiniteStructure, which additionally enforces $V(x_0) = 0$.

NeuralLyapunov.NonnegativeNeuralLyapunov — Function
NonnegativeNeuralLyapunov(network_dim; <keyword_arguments>)

Create a NeuralLyapunovStructure where the Lyapunov function is the squared L2 norm of the neural network output plus a constant δ times a function pos_def.

Corresponds to $V(x) = \lVert ϕ(x) \rVert^2 + δ \, \texttt{pos\_def}(x, x_0)$, where $ϕ$ is the neural network and $x_0$ is the equilibrium point.

This structure ensures $V(x) ≥ 0 \, ∀ x$ when $δ ≥ 0$ and pos_def is always nonnegative. Further, if $δ > 0$ and pos_def is strictly positive definite around fixed_point, the structure ensures that $V(x)$ is strictly positive away from fixed_point. In such cases, the minimization condition reduces to ensuring $V(x_0) = 0$, and so DontCheckNonnegativity(true) should be used.

Arguments

  • network_dim: output dimensionality of the neural network.

Keyword Arguments

  • δ: weight of pos_def, as above; defaults to 0.
  • pos_def(state, fixed_point): a function that is positive (semi-)definite in state around fixed_point; defaults to $\log(1 + \lVert x - x_0 \rVert^2)$.
  • grad_pos_def(state, fixed_point): the gradient of pos_def with respect to state at state. If isnothing(grad_pos_def) (as is the default), the gradient of pos_def will be evaluated using grad.
  • grad: a function for evaluating gradients to be used when isnothing(grad_pos_def); defaults to, and expects the same arguments as, ForwardDiff.gradient.

Dynamics are assumed to be in f(state, p, t) form, as in an ODEFunction. For f(state, input, p, t), consider using add_policy_search.

See also: DontCheckNonnegativity

source
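The resulting candidate can be sketched in plain Julia (the two-output network below is a hypothetical stand-in; pos_def is the default quoted above):

```julia
# Default pos_def from the docstring: log(1 + ‖x - x₀‖²).
pos_def(state, fixed_point) = log(1 + sum(abs2, state .- fixed_point))

# Hypothetical two-output network stand-in.
phi(x) = [sum(x), prod(x)]

# V(x) = ‖ϕ(x)‖² + δ·pos_def(x, x₀)
δ = 0.1
V(state, fixed_point) = sum(abs2, phi(state)) + δ * pos_def(state, fixed_point)

x0 = [0.0, 0.0]
V(x0, x0)          # 0.0 here only because this toy ϕ happens to vanish at x₀;
                   # in general V(x₀) = 0 must still be trained for
V([1.0, 1.0], x0)  # nonnegative by construction: ‖ϕ‖² + δ·log(3) > 0
```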
NeuralLyapunov.PositiveSemiDefiniteStructure — Function
PositiveSemiDefiniteStructure(network_dim; <keyword_arguments>)

Create a NeuralLyapunovStructure where the Lyapunov function is the product of a positive (semi-)definite function pos_def, which does not depend on the network, and a nonnegative function non_neg, which does depend on the network.

Corresponds to $V(x) = \texttt{pos\_def}(x, x_0) * \texttt{non\_neg}(ϕ, x, x_0)$, where $ϕ$ is the neural network and $x_0$ is the equilibrium point.

This structure ensures $V(x) ≥ 0$. Further, if pos_def is strictly positive definite around fixed_point and non_neg is strictly positive (as is the case for the default values of pos_def and non_neg), then this structure ensures $V(x)$ is strictly positive definite around fixed_point. In such cases, the minimization condition is satisfied structurally, so DontCheckNonnegativity(false) should be used.

Arguments

  • network_dim: output dimensionality of the neural network.

Keyword Arguments

  • pos_def(state, fixed_point): a function that is positive (semi-)definite in state around fixed_point; defaults to $\log(1 + \lVert x - x_0 \rVert^2)$.
  • non_neg(net, state, fixed_point): a nonnegative function of the neural network; note that net is the neural network $ϕ$, and net(state) is the value of the neural network at a point $ϕ(x)$; defaults to $1 + \lVert ϕ(x) \rVert^2$.
  • grad_pos_def(state, fixed_point): the gradient of pos_def with respect to state at state. If isnothing(grad_pos_def) (as is the default), the gradient of pos_def will be evaluated using grad.
  • grad_non_neg(net, J_net, state, fixed_point): the gradient of non_neg with respect to state at state; J_net is a function outputting the Jacobian of net at the input. If isnothing(grad_non_neg) (as is the default), the gradient of non_neg will be evaluated using grad.
  • grad: a function for evaluating gradients to be used when isnothing(grad_pos_def) || isnothing(grad_non_neg); defaults to, and expects the same arguments as, ForwardDiff.gradient.

Dynamics are assumed to be in f(state, p, t) form, as in an ODEFunction. For f(state, input, p, t), consider using add_policy_search.

See also: DontCheckNonnegativity

source
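With the default choices quoted above, the product structure can be sketched as follows (the network ϕ is again a hypothetical stand-in):

```julia
# Defaults from the docstring:
pos_def(state, fixed_point) = log(1 + sum(abs2, state .- fixed_point))
non_neg(net, state, fixed_point) = 1 + sum(abs2, net(state))

# V(x) = pos_def(x, x₀) * non_neg(ϕ, x, x₀)
V(net, state, fixed_point) =
    pos_def(state, fixed_point) * non_neg(net, state, fixed_point)

phi(x) = sin.(x)                 # toy network stand-in (hypothetical)
x0 = zeros(2)
V(phi, x0, x0)                   # 0.0: pos_def vanishes at x₀ regardless of ϕ
V(phi, [1.0, -1.0], x0)          # strictly positive away from x₀
```

Because pos_def vanishes only at x₀ and non_neg is at least 1, the product is zero at the fixed point and strictly positive elsewhere, which is why no nonnegativity check is needed during training.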

Defining your own neural Lyapunov function structure

To define a new structure for a neural Lyapunov function, one must specify the form of the Lyapunov candidate $V$ and its time derivative along a trajectory $\dot{V}$, as well as how to call the dynamics $f$. Additionally, the dimensionality of the output of the neural network must be known in advance.

NeuralLyapunov.NeuralLyapunovStructure — Type
NeuralLyapunovStructure(V, V̇, f_call, network_dim)

Specifies the structure of the neural Lyapunov function and its derivative.

Allows the user to define the Lyapunov function in terms of the neural network, potentially structurally enforcing some Lyapunov conditions.

Fields

  • V(phi::Function, state, fixed_point): outputs the value of the Lyapunov function at state.
  • V̇(phi::Function, J_phi::Function, dynamics::Function, state, params, t, fixed_point): outputs the time derivative of the Lyapunov function at state.
  • f_call(dynamics::Function, phi::Function, state, params, t): outputs the derivative of the state; this is useful for making closed-loop dynamics which depend on the neural network, such as in the policy search case.
  • network_dim: the dimension of the output of the neural network.

phi and J_phi above are both functions of state alone.

source
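A hypothetical custom structure might define the four fields as plain functions matching the signatures above. Here $V(x) = \lVert ϕ(x) \rVert^2$ and V̇ applies the chain rule; the linear network and dynamics at the end exist only to check the wiring and are not part of the structure:

```julia
# Fields to pass to NeuralLyapunovStructure(V, V̇, f_call, network_dim):
V(phi, state, fixed_point) = sum(abs2, phi(state))
function V̇(phi, J_phi, dynamics, state, params, t, fixed_point)
    # Chain rule: d/dt ‖ϕ(x)‖² = 2 ϕ(x)ᵀ J_ϕ(x) ẋ
    ẋ = dynamics(state, params, t)
    return 2 * phi(state)' * (J_phi(state) * ẋ)
end
f_call(dynamics, phi, state, params, t) = dynamics(state, params, t)
network_dim = 2

# Toy check: linear "network" ϕ(x) = A x and stable dynamics ẋ = -x.
A = [1.0 0.0; 0.0 2.0]
phi(x) = A * x
J_phi(x) = A
dyn(x, p, t) = -x

V(phi, [1.0, 1.0], zeros(2))                              # ‖ϕ(x)‖² = 5.0
V̇(phi, J_phi, dyn, [1.0, 1.0], nothing, 0.0, zeros(2))    # -2‖ϕ(x)‖² = -10.0
```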

Calling the dynamics

Very generally, the dynamical system can be a system of ODEs $\dot{x} = f(x, u, p, t)$, where $u$ is a control input, $p$ contains parameters, and $f$ depends on the neural network in some way. To capture this variety, users must supply the function f_call(dynamics::Function, phi::Function, state, p, t).

The most common example is when dynamics takes the same form as an ODEFunction, i.e., $\dot{x} = \texttt{dynamics}(x, p, t)$. In that case, f_call(dynamics, phi, state, p, t) = dynamics(state, p, t).

Suppose instead that the dynamics are supplied as a function of the state alone: $\dot{x} = \texttt{dynamics}(x)$. Then f_call(dynamics, phi, state, p, t) = dynamics(state).

Finally, suppose $\dot{x} = \texttt{dynamics}(x, u, p, t)$ has a scalar control input $u$ that is being trained (via policy search) to be the second output of the neural network. Then f_call(dynamics, phi, state, p, t) = dynamics(state, phi(state)[2], p, t).
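The three cases above can be written out directly as f_call definitions (sketches; dynamics and phi are whatever the user supplies):

```julia
# Case 1: ODEFunction-style dynamics(x, p, t).
f_call_ode(dynamics, phi, state, p, t) = dynamics(state, p, t)

# Case 2: dynamics as a function of the state alone.
f_call_state(dynamics, phi, state, p, t) = dynamics(state)

# Case 3: scalar control input taken from the network's second output.
f_call_ctrl(dynamics, phi, state, p, t) = dynamics(state, phi(state)[2], p, t)

# Toy check of case 3 (hypothetical dynamics and network): ẋ = -x .+ u, u = ϕ(x)[2].
dyn(x, u, p, t) = -x .+ u
phi(x) = [sum(x), -sum(x)]
f_call_ctrl(dyn, phi, [1.0, 2.0], nothing, 0.0)   # u = -3, giving [-4.0, -5.0]
```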

Note that, despite the inclusion of the time variable $t$, NeuralLyapunov.jl currently only supports time-invariant systems, so only t = 0.0 is used.

Structuring $V$ and $\dot{V}$

The Lyapunov candidate function $V$ gets specified as a function V(phi, state, fixed_point), where phi is the neural network as a function phi(state). Note that this form allows $V(x)$ to depend on the neural network evaluated at points other than just the input $x$.
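One use of this flexibility (a sketch, not one of the package's pre-defined structures) is subtracting the network's value at the fixed point, which makes $V(x_0) = 0$ hold structurally for any network:

```julia
# Candidate that evaluates the network at two points: V(x) = ϕ(x) - ϕ(x₀).
V(phi, state, fixed_point) = phi(state) - phi(fixed_point)

phi(x) = sum(abs2, x) + 1.0   # hypothetical scalar network with ϕ(x₀) ≠ 0
V(phi, zeros(2), zeros(2))    # 0.0 by construction, even though ϕ(x₀) = 1.0
```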

The time derivative $\dot{V}$ is similarly defined by a function V̇(phi, J_phi, dynamics, state, params, t, fixed_point). The function J_phi(state) gives the Jacobian of the neural network phi at state. The function dynamics is as above (with parameters params).