Improved memory management. #5450
base: master
Conversation
I've been testing this and it seems to work mostly fine. However, my prompt control nodes seem to have some problems with LoRA switching. I also have another problem where some model reference becomes None when I switch models, but I haven't figured out why, or whether that's also a bug in some of the nodes I use. I'll try to see if I can reproduce that problem with a simpler workflow. It's likely these are just bugs in my custom nodes, but I thought I'd let you know anyway.
Good catch.
The code of the patch_model function hasn't changed; how are you using it?
I install a monkey patch that hijacks the callback during sampling to add and remove LoRA patches. The whole thing is honestly a huge pile of hacks, so it's entirely possible it worked merely by accident before and this change is just exposing some bugs.
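For anyone curious what "hijacking the callback" looks like, here is a minimal sketch of the pattern. All names here (`Sampler`, `on_step`, `install_lora_hook`) are hypothetical illustrations of the technique, not ComfyUI's actual API:

```python
# Sketch of monkey-patching a per-step sampling callback to swap
# LoRA patches mid-sampling. Names are illustrative, not ComfyUI's.

class Sampler:
    def on_step(self, step: int) -> None:
        print(f"sampling step {step}")

    def run(self, steps: int) -> None:
        for i in range(steps):
            self.on_step(i)

def install_lora_hook(sampler: Sampler, switch_at: int) -> None:
    """Wrap the sampler's per-step callback so patches can be
    added/removed mid-sampling before the original callback runs."""
    original = sampler.on_step  # keep the original bound method

    def hooked(step: int) -> None:
        if step == switch_at:
            # a real hook would add/remove LoRA patches here
            print("swapping LoRA patches")
        original(step)  # fall through to the original callback

    sampler.on_step = hooked  # instance attribute shadows the method

sampler = Sampler()
install_lora_hook(sampler, switch_at=2)
sampler.run(4)
```

The fragility the commenter describes comes from exactly this kind of shadowing: if the host code changes when or how the callback is invoked, the hook silently breaks.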
This will be merged in #5583, so please go test that one.
These changes make the memory management less fragile (much less chance of custom nodes/extensions/future code changes breaking it) and should remove the noticeable delay when changing workflows with large models.
I'm making a PR with these changes so people can test them and make sure there are no obvious bugs before I merge.
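As a rough illustration of why keeping loaded weights resident removes the workflow-switch delay, here is a hedged sketch of a byte-budgeted LRU model cache. The names (`ModelCache`, `loader`) and the eviction policy are assumptions made for illustration, not the actual logic in this PR:

```python
from collections import OrderedDict

# Hypothetical sketch: keep loaded models resident under a memory
# budget so switching workflows does not re-read weights from disk.
class ModelCache:
    def __init__(self, budget_bytes: int) -> None:
        self.budget = budget_bytes
        self.models: OrderedDict[str, tuple[object, int]] = OrderedDict()

    def get(self, path: str, loader) -> object:
        if path in self.models:
            self.models.move_to_end(path)      # mark as recently used
            return self.models[path][0]        # no disk load needed
        model, size = loader(path)             # expensive disk load
        while self.models and self._used() + size > self.budget:
            self.models.popitem(last=False)    # evict least recently used
        self.models[path] = (model, size)
        return model

    def _used(self) -> int:
        return sum(size for _, size in self.models.values())
```

With something like this in place, switching back to a recently used workflow hits the cache instead of reloading multi-gigabyte checkpoints, which is the delay the description refers to.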