Thanks for this amazing package!

I was trying to test the memory usage of the adjoint method. As claimed by the authors of the original neural ODE paper, the adjoint method should use less memory than vanilla "autograd". However, the output of `torch.cuda.memory_summary()` shows an *increase* in GPU memory for the adjoint method compared to autograd. I'm wondering whether I used `torch.cuda.memory_summary()` incorrectly — I printed it after training finished. If my approach was wrong, what is the correct way to measure memory usage for the "adjoint" and "autograd" methods?
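One likely issue: `torch.cuda.memory_summary()` printed after training reflects the caching allocator's state (including cached-but-free blocks), not the peak memory used during the backward pass, which is where adjoint and autograd differ. A minimal sketch of measuring the training-time peak instead — `peak_memory_mb` and the `step_fn` argument are hypothetical names standing in for your actual adjoint/autograd training step:

```python
import torch

def peak_memory_mb(step_fn, device="cuda"):
    """Run step_fn once and return the peak GPU memory allocated, in MB.

    Peak stats must be reset *before* the step runs; reading allocator
    summaries after training does not capture the transient peak of the
    backward pass, which is what distinguishes adjoint from autograd.
    Returns None when no GPU is available.
    """
    if not torch.cuda.is_available():
        return None  # nothing to measure on CPU
    torch.cuda.empty_cache()                     # drop cached free blocks
    torch.cuda.reset_peak_memory_stats(device)   # zero the peak counter
    step_fn()                                    # forward + backward + step
    torch.cuda.synchronize(device)               # wait for queued kernels
    return torch.cuda.max_memory_allocated(device) / 2**20
```

To compare the two methods, you would call this once with a training step built on `odeint` and once with one built on `odeint_adjoint`, resetting the model/optimizer state in between so the runs are comparable. Note the adjoint method trades memory for compute only when the integration has many internal solver steps; for short or easy trajectories the overhead of the adjoint machinery can dominate and the measured peak can genuinely be higher.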