[OpenVINO backend] support export model from the supported backends to openvino format #21486
Conversation
Summary of Changes
Hello @Mohamed-Ashraf273, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces the capability to export Keras models to the OpenVINO Intermediate Representation (IR) format. This new functionality allows users to serialize Keras models, regardless of the backend they were loaded with, into a format optimized for OpenVINO inference.
Highlights
- New Export Format: Introduced 'openvino' as a new supported format for the model.export() method, allowing Keras models to be serialized into OpenVINO Intermediate Representation (IR).
- OpenVINO Export Logic: Implemented a dedicated export_openvino function that handles the conversion of Keras models to OpenVINO IR. This function adapts its conversion strategy based on the active Keras backend (e.g., direct conversion for TensorFlow, or via a temporary tf_saved_model for JAX/Torch).
- Backend Compatibility: The new OpenVINO export functionality supports models originating from the OpenVINO, TensorFlow, JAX, and Torch backends, providing a unified way to generate OpenVINO IR from Keras models.
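The backend-dependent dispatch described in the highlights can be sketched roughly as follows. This is an illustrative, self-contained sketch: the function name `pick_conversion_strategy` and the returned strategy labels are placeholders for illustration, not Keras internals.

```python
def pick_conversion_strategy(backend_name):
    """Return the conversion strategy the exporter would use
    for a given active Keras backend (illustrative sketch)."""
    if backend_name == "openvino":
        # The OpenVINO backend can serialize its own graph directly.
        return "direct_save"
    elif backend_name == "tensorflow":
        # TensorFlow models are converted via ov.convert_model.
        return "convert_model"
    elif backend_name in ("jax", "torch"):
        # JAX/Torch first export a temporary tf_saved_model, then convert.
        return "via_temporary_tf_saved_model"
    raise NotImplementedError(f"Unsupported backend: {backend_name}")
```

The key design point discussed later in this thread is the JAX/Torch branch: rather than converting in-memory objects, the exporter round-trips through a temporary SavedModel so the converter sees concrete input signatures.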
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please report any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Code Review
This pull request introduces the capability to export Keras models to the OpenVINO format. I've identified a few issues in the export_openvino function related to the OpenVINO backend export logic and file path handling for the JAX and Torch backends. Addressing these points will help ensure the new feature is robust and works as expected across all supported backends.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## master #21486 +/- ##
==========================================
+ Coverage 82.81% 82.90% +0.09%
==========================================
Files 565 566 +1
Lines 55520 55892 +372
Branches 8664 8724 +60
==========================================
+ Hits 45977 46337 +360
Misses 7428 7428
- Partials 2115 2127 +12
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Force-pushed 95e8782 to c0614a3, then c0614a3 to 850d003.
/gemini review
Code Review
This pull request introduces the export_openvino function, enabling Keras models to be exported to the OpenVINO format. The implementation supports direct export when using the OpenVINO backend, and conversion from other backends like TensorFlow, JAX, and Torch. The review focuses on improving code clarity and robustness.
Force-pushed 6927a53 to b6d1663, 8fa393e to 107254f, then 107254f to 4e93a74.
/gemini review
Code Review
This pull request introduces a valuable new feature: the ability to export Keras models to the OpenVINO format, enabling optimized inference on Intel hardware. I've identified a high-severity issue in how dynamic shapes are handled, as well as a minor redundancy in the export logic and a duplicated line in the new tests. Addressing these points will enhance the correctness and quality of this new feature.
keras/src/export/openvino_test.py
Outdated
larger_input_y = np.concatenate([ref_input_y, ref_input_y], axis=0)
compiled_model([larger_input_x, larger_input_y])
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
keras/src/export/openvino.py
Outdated
elif backend.backend() == "tensorflow":
    import tempfile

    with tempfile.TemporaryDirectory() as temp_dir:
        model.export(temp_dir, format="tf_saved_model")
        ov_model = ov.convert_model(temp_dir)
else:
    raise NotImplementedError(
        "`export_openvino` is only compatible with OpenVINO and "
        "TensorFlow backends."
    )
Not sure why we need it here?
This is for when the model is loaded in the TF backend and the user needs to convert it to OpenVINO IR.
ov.convert_model supports conversion of a TF object from memory as well. Will it work?
No, it doesn't work in our case because the model has dynamic input shapes and no example_input is passed. This causes ov.convert_model to fail with a ValueError related to an unknown TensorShape. Exporting to a SavedModel first resolves this by providing concrete input signatures.
If the example_input is the only blocker, we can use convert_spec_to_tensor to create a dummy input.
Ref: https://github.com/keras-team/keras/blob/master/keras/src/export/onnx.py#L91-L95
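The suggestion above refers to Keras's convert_spec_to_tensor helper (see the linked onnx.py reference). The core idea can be sketched standalone as follows; the name spec_to_dummy_tensor and the plain (shape, dtype) spec representation are assumptions for illustration, not the actual Keras helper.

```python
import numpy as np

def spec_to_dummy_tensor(shape, dtype="float32"):
    """Build a zero-filled dummy tensor from an input spec,
    substituting 1 for any unknown (None) dimension."""
    concrete_shape = tuple(1 if dim is None else dim for dim in shape)
    return np.zeros(concrete_shape, dtype=dtype)

# A spec with a dynamic batch dimension yields a batch-of-1 dummy tensor.
dummy = spec_to_dummy_tensor((None, 28, 28, 3))
```

As the follow-up reply notes, a dummy input like this fixes the "no example_input" failure but bakes the traced shape into the converted model, which is why the PR round-trips through a SavedModel instead.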
You're right that convert_spec_to_tensor() can provide a dummy input, and it is already used by the OpenVINO backend, but the issue here isn't just about having any input; it's about the shape of that input. In our case, the model was traced with a specific shape, and later we ran inference with a different shape, which caused a mismatch error in OpenVINO.
It might be solvable in more complex ways, or like how onnx did it in the TensorFlow part, but for OpenVINO, I think exporting to tf_saved_model first is the safest and easiest method.
@fchollet, looks good to me. Recommend merging.
Force-pushed 7329129 to 8af753a.
/gemini review
Code Review
This pull request introduces support for exporting Keras models to the OpenVINO format, which is a great addition for inference on Intel hardware. The implementation handles conversions from TensorFlow, JAX, Torch, and the native OpenVINO backends. The code includes tests for various model architectures and input structures.
keras/src/export/openvino.py
Outdated
        model inputs. If not provided, it will be inferred.
    **kwargs: Additional keyword arguments (currently unused).
Example:
    import keras
Please use a code block (and no starting indent) for the code example
Done!
    input_signature: Optional. Specifies the shape and dtype of the
        model inputs. If not provided, it will be inferred.
    **kwargs: Additional keyword arguments (currently unused).
Example:
Add line break above
Done!
Looks good, thank you!
@rkazants
@fchollet
Implement logic in Keras that allows serializing a model loaded in any of the supported backends (TensorFlow, JAX, Torch, and OpenVINO) into IR on disk.