[WIP] Integration with DeepLabCut 3.0 - PyTorch Engine #121
base: main
Conversation
This commit corrects the parameter names and order and removes unsupported arguments from the `benchmark_videos` call:
- device
- precision
- draw_keypoint_names
- get_sys_info
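For context, a minimal sketch of what the corrected call might look like after this change. Only `model_path` and `model_type` are taken from this PR's diff; the `video_path` argument and the import location are assumptions.

```python
# Sketch only: video_path and the import location are assumptions, not from the PR.
from dlclive import benchmark_videos  # import location assumed

benchmark_videos(
    model_path="exported-models/my_model",  # placeholder model path
    model_type="pytorch",                   # "base" for TensorFlow exports
    video_path="dog_clip.avi",              # placeholder; parameter name assumed
    # device, precision, draw_keypoint_names and get_sys_info were dropped here
    # because benchmark_videos does not accept them.
)
```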
- pyproject.toml is made PEP 517 / 518 / 621-compliant, which makes it easy to inherit dependencies
- it is more tool-agnostic (no poetry required & facilitates uv packaging)
- Python versions 3.10 up to 3.12
- tensorflow installation is restricted to python < 3.12 for windows
Co-authored-by: Copilot <[email protected]>
- Remove the upper bound for the tensorflow version if the python version is >= 3.11
- Add pip as a required dependency
Remove tkinter as a required dependency.
(Remove the conditional tf dependency install for windows and python >= 3.11, since it is never reached.)
*Update pyproject.toml and CI*
- pyproject.toml is made PEP 517 / 518 / 621-compliant, which makes it easy to inherit dependencies
- it is more tool-agnostic (no poetry required & facilitates uv packaging)
- Python versions 3.10 up to 3.12
- Tensorflow installation is restricted to python < 3.11 for windows
- CI is updated to test more Python versions and to use uv instead of poetry.
Added a more robust downloading function in `utils.py` that handles errors more gracefully. The download is skipped if the file already exists. In the previous implementation, the benchmarking video was downloaded to the home directory, which on Windows could result in permission issues when reading the file. This is now changed to the check_install directory. OpenCV was silently failing to read the video file, resulting in a frame count of zero; now a ValueError is raised when the video cannot be read.
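Not the actual code from the commit, just a rough sketch of the behaviour described above (skip if the file already exists, download into the check_install directory rather than the home directory, and raise if OpenCV reports zero frames); the function and variable names are illustrative.

```python
# Illustrative sketch only; names are not taken from the PR.
import urllib.request
from pathlib import Path

import cv2


def download_benchmark_video(url: str, dest_dir: Path) -> Path:
    """Download a video into dest_dir, skipping the download if it already exists."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(url).name
    if not dest.exists():
        try:
            urllib.request.urlretrieve(url, str(dest))
        except Exception as err:
            raise RuntimeError(f"Failed to download {url}: {err}") from err

    # OpenCV can fail silently and report zero frames; treat that as an error.
    cap = cv2.VideoCapture(str(dest))
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    if n_frames <= 0:
        raise ValueError(f"Could not read video file {dest} (frame count = {n_frames}).")
    return dest
```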
- Removed poetry instructions
- Added optional instructions for installation with uv
- Added notes on tensorflow support for windows
deruyter92 left a comment
Looked through most of the changes and addressed the merge conflicts. I think it would be great if we could merge this WIP as soon as possible, so that we can cleanly contribute additional PRs.
What I have worked on:
- Merged with main (accepting combination of both, but mostly dlclive3 branch for updated code)
- Complete update of pyproject.toml
- more tool-agnostic (no poetry required)
- uv-friendly, so the user can choose their own preferred packaging method
- updated missing dependencies
- Python versions 3.10, 3.11, 3.12 (except for tensorflow users on windows)
- restrict windows users to python 3.10 for tensorflow
- Changed the CI accordingly to test for multiple python versions.
- CI uses uv now for installation.
- Added some basic testing
- Addressed copilot and other comments
@sneakers-the-rat, @MMathisLab, @C-Achard please let me know what you think and if you'd like me to incorporate additional changes.
@deruyter92 Anything I can focus on/help with?
I wonder if we could remove the package-level upper bound on python version and just throw an error if someone tries to use tensorflow with python > 3.12 (or, probably, just throw an error if it can't be imported, in case someone builds their own tensorflow for > 3.12). I am assuming torch works with > 3.12, and it would be nice to not have that limit for downstream use, future compatibility, etc. Edit: otherwise I still have the same reservations re: maintainability of duplicating the dlc code here; I would prefer that code is split out in such a way that the main …
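As a sketch of that suggestion (the helper name and error wording are mine, not from the PR): defer the check to import time and raise a clear error only when the TensorFlow engine is actually requested.

```python
# Illustrative sketch: raise a clear error when TensorFlow is requested but
# unavailable, instead of capping the supported Python versions package-wide.
def require_tensorflow():
    try:
        import tensorflow as tf
    except ImportError as err:
        raise ImportError(
            "The TensorFlow engine was requested, but tensorflow could not be "
            "imported. Install it (e.g. via the `tf` extra) or use the PyTorch engine."
        ) from err
    return tf
```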
| print(f"Running dog model: {model_path}") | ||
| benchmark_videos( | ||
| model_path=model_path, | ||
| model_type="base" if Engine.from_model_path(model_path) == Engine.TENSORFLOW else "pytorch", |
I believe this should continue to test tensorflow models, as long as the tensorflow models remain in the benchmarking data. Is that right? There is the possibility that the benchmarking data becomes updated and we silently stop testing tensorflow, so I think it would be nice to parameterize this test to explicitly run with pytorch and tensorflow, and fail the test if no tensorflow models are found.
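A sketch of what that parameterization could look like. `Engine.PYTORCH` is assumed as the counterpart of the `Engine.TENSORFLOW` member visible in the diff, `BENCHMARK_MODELS` stands in for however the test suite discovers its downloaded models, and the `benchmark_videos` import location is assumed.

```python
# Sketch only; BENCHMARK_MODELS, Engine.PYTORCH and the import location are assumptions.
import pytest

from dlclive import benchmark_videos  # import location assumed
from dlclive.engine import Engine

BENCHMARK_MODELS: list[str] = []  # placeholder: filled by the benchmark download step


@pytest.mark.parametrize("engine", [Engine.TENSORFLOW, Engine.PYTORCH])
def test_benchmark_runs_for_each_engine(engine):
    models = [m for m in BENCHMARK_MODELS if Engine.from_model_path(m) == engine]
    # Fail instead of silently skipping an engine that dropped out of the benchmark data.
    assert models, f"No benchmark models found for {engine}"
    for model_path in models:
        benchmark_videos(
            model_path=model_path,
            model_type="base" if engine == Engine.TENSORFLOW else "pytorch",
        )
```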
Co-authored-by: Jonny Saunders <[email protected]>
from dlclive.engine import Engine

MODEL_NAME = "superanimal_quadruped"
SNAPSHOT_NAME = "snapshot-700000.pb"
It seems to me that this always assumes that tf is installed - should this be updated dynamically?
from importlib.util import find_spec  # import needed for the checks below

has_tf = find_spec("tensorflow") is not None
has_torch = find_spec("torch") is not None

if has_tf:
    model_type = "base"
    MODEL_NAME = "superanimal_quadruped"
    SNAPSHOT_NAME = "snapshot-700000.pb"
elif has_torch:
    model_type = "pytorch"
    MODEL_NAME = ...  # "pt model"
    SNAPSHOT_NAME = ...  # "model.pth"
else:
    raise RuntimeError(
        "Neither TensorFlow nor PyTorch is installed. "
        "Please install one using `tf` or `pytorch` optional dependencies."
    )
Seems like this exception should be raised in one place in the package - if having neither should be considered an exception, it should get raised predictably.
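Something along these lines is presumably what is meant; a sketch (the helper name and its location are not from the PR) with a single place that resolves the engine and raises the "neither installed" error.

```python
# Sketch: one well-defined place to raise the "no engine installed" error.
# The helper name and module location are illustrative, not from the PR.
from importlib.util import find_spec


def resolve_available_engine() -> str:
    """Return the model_type for the available engine, or raise one predictable error."""
    if find_spec("tensorflow") is not None:
        return "base"
    if find_spec("torch") is not None:
        return "pytorch"
    raise RuntimeError(
        "Neither TensorFlow nor PyTorch is installed. "
        "Please install one using the `tf` or `pytorch` optional dependencies."
    )
```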
One thing I'd like to add before merging is improved modelzoo functionality where PyTorch models are exported in DeepLabCut-live format. This requires finding the corresponding model configurations & storing them inside the .pt file.
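As a rough illustration of that idea (not the planned implementation, and every key name below is hypothetical): the model configuration could be bundled into the exported checkpoint so the live runner can rebuild the network without the training project.

```python
# Hypothetical sketch: store the model configuration inside the exported .pt file.
import torch


def export_for_dlclive(model: torch.nn.Module, model_cfg: dict, out_path: str) -> None:
    torch.save(
        {
            "config": model_cfg,                     # hypothetical key: model configuration
            "model_state_dict": model.state_dict(),  # hypothetical key: network weights
        },
        out_path,
    )


# Loading side (sketch): read the config back before building the network.
# checkpoint = torch.load("model.pt", map_location="cpu")
# cfg, state = checkpoint["config"], checkpoint["model_state_dict"]
```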
Let's leave that for a follow-up PR; this is already too large.
Pull request overview
Copilot reviewed 84 out of 89 changed files in this pull request and generated 17 comments.
| """ | ||
| import pytest | ||
| import numpy as np | ||
| from pathlib import Path |
Copilot AI (Jan 15, 2026)
Import of 'Path' is not used.
This pull request updates DeepLabCut-Live for models exported with DeepLabCut 3.0. TensorFlow models can still be used, and the code is siloed so that only the engine used to run the code is required as a package (i.e. no need to install TensorFlow if you want to run live pose estimation with PyTorch models).
If you want to give this PR a try, you can install the code in your local `conda` environment by running: `pip install "git+https://github.com/DeepLabCut/DeepLabCut-live.git@dlclive3"`
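For orientation, a hypothetical usage sketch that assumes the pre-existing DLCLive API (`DLCLive`, `init_inference`, `get_pose`) stays the same for PyTorch-exported models; the model and video paths are placeholders.

```python
# Hypothetical usage sketch; assumes the existing DLCLive API is unchanged for
# PyTorch-exported models. Paths below are placeholders.
import cv2

from dlclive import DLCLive

dlc_live = DLCLive("path/to/exported_pytorch_model")  # placeholder model path

cap = cv2.VideoCapture("my_video.avi")  # placeholder video
ret, frame = cap.read()
if not ret:
    raise ValueError("Could not read a frame from the video.")

dlc_live.init_inference(frame)   # first frame: builds and warms up the network
pose = dlc_live.get_pose(frame)  # subsequent frames: keypoints as (n_keypoints, 3) array
cap.release()
print(pose)
```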