Memory usage increases with multiple realizations #127

Closed
dimitri-mar opened this issue Jul 31, 2023 · 3 comments

Comments

@dimitri-mar

I would like to compute some statistical quantities from multiple realizations of a dynamical system. I reproduced the issue using one of the examples, kuramoto_oscillators.jl.
For example, I simply introduced a loop that repeats either the sampling of the initial conditions plus the solve (commented out below) or the entire construction of the dynamical system. I expected the variables to be overwritten at each repetition rather than consuming additional memory, but the memory usage keeps growing with every repetition and eventually saturates all the available memory. I am relatively new to Julia, so I apologize if I am missing something. I tried to track down the leak with the PProf profiler without success, and setting all the variables to nothing did not help either. I seem to be making a mistake, but I'm not sure what it is. (Julia versions 1.9.0 and 1.9.2)

using NetworkDynamics
using Graphs
using OrdinaryDiffEq
using Plots

### Defining the graph

N = 50 # nodes
k = 5 # node degree
g = barabasi_albert(N, k) # graph

### Network dynamics vertex and edge functions

@inline function kuramoto_vertex!(dv, v, edges, p, t)
    dv .= p
    sum_coupling!(dv, edges)
    nothing
end

@inline function kuramoto_edge!(e, v_s, v_d, p, t)
    e .= p * sin.(v_s .- v_d)
    nothing
end

n_repetitions = 1000
### Constructing the network dynamics
for rep in 1:n_repetitions
    println(rep)
    odevertex = ODEVertex(; f=kuramoto_vertex!, dim=1)
    staticedge = StaticEdge(; f=kuramoto_edge!, dim=1)

    # generating random values for the parameter value ω_0 of the vertices
    v_pars = [1.0 * randn() for v in vertices(g)]
    # coupling strength of the edges is set to 1/3
    e_pars = [1.0 / 3.0 for e in edges(g)]

    parameters = (v_pars, e_pars)

    # setting up the network_dynamics
    kuramoto_network! = network_dynamics(odevertex, staticedge, g)

### Simulation and Plotting
#for rep in 1:n_repetitions
    #println(rep)
    # constructing random initial conditions for nodes (variable θ)
    x0 = randn(nv(g)) # nv(g) - number of vertices in g
    dx = similar(x0)

    prob = ODEProblem(kuramoto_network!, x0, (0.0, 2000), parameters)
    sol = solve(prob, Tsit5(); reltol=1e-6)
end
@hexaeder
Member

hexaeder commented Aug 1, 2023

Yeah, sometimes this problem comes up with Julia and it can be a bit annoying to fix. I think there are three things you could try:

  • put a GC.gc() somewhere in the loop; this triggers garbage collection manually and sometimes helps.
  • put your loop body into a separate function, pass g as a parameter, and return only the part of the solution you're interested in (i.e., the statistical observables rather than the full time series); see the sketch after this list. This keeps the definition of sol truly local (in your example it is still a global variable), which in my experience works better with garbage collection.
  • in case you're only interested in the final state rather than the trajectories, you could also play around with the solver option save_everystep to keep the sol object small in the first place. Similarly, if your system goes to a steady state, check out the TerminateSteadyState callback from DiffEqCallbacks.jl to avoid unnecessary time steps at the end.
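
For what it's worth, a minimal sketch of the second point, assuming the definitions from your post above (run_realization is just an illustrative name, not something from ND.jl):

# Hypothetical helper: build and solve one realization and return only the
# small quantity of interest, so the full solution object stays local to the
# function and can be garbage-collected after each call.
function run_realization(g)
    odevertex = ODEVertex(; f=kuramoto_vertex!, dim=1)
    staticedge = StaticEdge(; f=kuramoto_edge!, dim=1)

    v_pars = [1.0 * randn() for v in vertices(g)]
    e_pars = [1.0 / 3.0 for e in edges(g)]
    kuramoto_network! = network_dynamics(odevertex, staticedge, g)

    x0 = randn(nv(g))
    prob = ODEProblem(kuramoto_network!, x0, (0.0, 2000.0), (v_pars, e_pars))
    sol = solve(prob, Tsit5(); reltol=1e-6, save_everystep=false)

    return copy(sol[end])  # e.g. only the final state
end

final_states = [run_realization(g) for _ in 1:n_repetitions]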

@dimitri-mar
Author

Thank you very much @hexaeder. I was about to give up on using Julia for any simulation!
I tried the second option in several different ways, but it did not solve the issue. However, your suggestion of calling GC.gc() at each repetition does keep the memory usage constant over time! I put it right after the beginning of the for loop.
I looked through the documentation and could not find any guide explaining when you are supposed to trigger garbage collection manually. It feels a bit like a workaround. Is it?
It is a bit outside the scope of this issue, but could you point me to any documentation or tutorial that discusses the use of GC.gc()?
Regarding your last point, thank you very much. I will look into those suggestions. They seem very useful.
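
For completeness, this is the placement I mean, based on the loop from my first post:

for rep in 1:n_repetitions
    GC.gc()  # force a garbage collection at the start of every repetition
    println(rep)
    # ... construct the network, sample initial conditions, and solve as before ...
end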

@hexaeder
Member

hexaeder commented Aug 1, 2023

To be honest, not really. If you search for GC.gc() on the Julia Discourse there are many threads where people suggest it, which is how I know about it. Maybe you could post your specific example there and ask if somebody can explain the behaviour. You're right that manually triggering GC should not really be necessary.

Sometimes, especially on HPC systems, the maximum available memory is much higher than what your process is allowed to use (if you're on a compute cluster with 500 gigs of memory but only 100 allocated for your job...). For those scenarios, there is a new --heap-size-hint command line option in Julia 1.9 to make the GC more aggressive.
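
For example, you would start your script with something like the following (the value and script name are just placeholders; pick a limit below your job's allocation):

julia --heap-size-hint=90G my_simulation.jl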

I'll close the issue here since it is not really ND.jl related.

@hexaeder hexaeder closed this as completed Aug 1, 2023