How do I plot/check the trained neural network? #51
If you want to just look at the inputs and outputs of the neural network block, you can use

```julia
plot(res_sol, idxs=(sys.nn.input.u[1], sys.nn.output.u[1]))
```

which should show the learned input–output map, similar to the friction plot in the tutorial. You can also leverage the plotting interface for comparing the learned and the true friction function:

```julia
plot(res_sol, idxs=(sys.nn.input.u[1], sys.nn.output.u[1]))
plot!(res_sol, idxs=(sys.nn.input.u[1], friction(sys.nn.input.u[1])))
```

Bonus: now what if you want to evaluate the embedded neural network at arbitrary values?

```julia
julia> arguments(arguments(equations(sys.nn)[1].rhs[1])[1])[1]
Chain(
    layer_1 = Dense(1 => 10, mish, use_bias=false),   # 10 parameters
    layer_2 = Dense(10 => 10, mish, use_bias=false),  # 100 parameters
    layer_3 = Dense(10 => 1, use_bias=false),         # 10 parameters
)         # Total: 120 parameters,
          #          plus 0 states.
```

and you can call this with whatever inputs are needed:

```julia
julia> LuxCore.stateless_apply(arguments(arguments(equations(sys.nn)[1].rhs[1])[1])[1], [1.23], convert(res_sol.ps[sys.nn.T], res_sol.ps[sys.nn.p]))
1-element Vector{Float64}:
 18.04731050719053
```

but note that you also have to reconstruct the typed parameter object (the `convert(res_sol.ps[sys.nn.T], res_sol.ps[sys.nn.p])` part above). We should make this easier by storing the network as a callable parameter. I don't consider the above "official recommendations" 😅, but I wanted to point out that it's possible. Since it relies on implementation details of the NN block, this is not public API in any form, use at your own risk.
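To make repeated evaluation less error-prone, the extraction above could be wrapped in a small helper. A minimal sketch, assuming the same internal block structure shown above; `extract_network` is a hypothetical name, not part of the package:

```julia
# Hypothetical helper: relies on the internal structure of the NN block
# shown above, so it is not public API and may break between versions.
function extract_network(nn_sys, sol)
    # pull the Lux Chain out of the block's first equation
    chain = arguments(arguments(equations(nn_sys)[1].rhs[1])[1])[1]
    # reconstruct the typed parameter object from the solution
    ps = convert(sol.ps[nn_sys.T], sol.ps[nn_sys.p])
    return x -> LuxCore.stateless_apply(chain, x, ps)
end

friction_nn = extract_network(sys.nn, res_sol)
friction_nn([1.23])  # should match the value printed above
```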
Thanks a lot! Yeah, something like this would be really useful, especially when evaluating it and there is a known function from which the data is fetched. Then I could check whether this function is recovered (which is a lot more intuitive than just checking what it evaluates to across the simulation). Just checking so I get it right: say that I want to evaluate the friction NN at velocity = 1.5, would that be

```julia
friction = LuxCore.stateless_apply(arguments(arguments(equations(sys.nn)[1].rhs[1])[1])[1], [velocity], convert(res_sol.ps[sys.nn.T], res_sol.ps[sys.nn.p]))
```

?
Yes. If you have access to the neural network from the "outside" of the model it's much easier, and that's the recommended thing to do for now, but I wanted to point out that it should be technically possible even if you only have the model. If you save the Lux model separately, you can just call it with the trained parameters from the solution.
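For the "outside" approach, a minimal sketch, assuming `chain` is the `Lux.Chain` that was used to build the NN block (as in the friction tutorial), `res_sol` is the calibrated solution, and `friction` is the known true function; the velocity range is illustrative:

```julia
# evaluate the trained network over a grid of velocities and compare
# against the known friction function
ps_trained = convert(res_sol.ps[sys.nn.T], res_sol.ps[sys.nn.p])

velocities = range(0.0, 2.0; length = 200)
nn_friction = [only(LuxCore.stateless_apply(chain, [v], ps_trained)) for v in velocities]

plot(velocities, friction.(velocities), label = "true friction")
plot!(velocities, nn_friction, label = "learned friction")
```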
Considering the example in https://sciml.github.io/ModelingToolkitNeuralNets.jl/dev/friction/, it is shown how to plot the friction over time and compare it to the true value.
However, we also started with friction as a function of velocity.
Would it be possible to evaluate the trained neural network (representing the friction) at various velocity values? Furthermore, could one then plot the real and trained friction functions?