[OpenVINO backend] support export model from the supported backends to openvino format #21486


Merged
merged 10 commits from the support_export branch into keras-team:master on Jul 24, 2025

Conversation

@Mohamed-Ashraf273 Mohamed-Ashraf273 (Contributor) commented Jul 16, 2025

@rkazants
@fchollet
Implement logic in Keras that allows serializing a model loaded in any of the supported backends (tensorflow, jax, torch, and openvino) into OpenVINO IR on disk.

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
os.environ["KERAS_BACKEND"] = "backend" # -> backend required
import numpy as np
from keras.src import layers, models
import openvino as ov

# Build a simple sequential model
model = models.Sequential([
    layers.Dense(4, activation="relu", input_shape=(5,)),
    layers.Dense(2, activation="sigmoid"),
])
# Prepare input
x = np.random.normal(size=(2, 5)).astype("float32")
# Get reference output
ref_output = model(x)
# Export model
temp_filepath = "temp_exported_model.xml"
model.export(temp_filepath, format="openvino")

core = ov.Core()
ov_model = core.read_model(temp_filepath)
compiled_model = core.compile_model(ov_model, "CPU")
x = np.random.normal(size=(1, 5)).astype("float32")
ov_output = compiled_model([x])[0]
print(ov_output)
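
A quick sanity check one might append to the snippet above (not part of the original example; the tolerance, batch size, and the keras.ops.convert_to_numpy call are my own assumptions):

# Sketch: feed one shared input to both the Keras model and the compiled
# OpenVINO model, then compare the outputs numerically.
from keras import ops

check_x = np.random.normal(size=(3, 5)).astype("float32")
keras_out = ops.convert_to_numpy(model(check_x))
ov_out = compiled_model([check_x])[0]
print("outputs match:", np.allclose(keras_out, ov_out, atol=1e-5))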

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @Mohamed-Ashraf273, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the capability to export Keras models to the OpenVINO Intermediate Representation (IR) format. This new functionality allows users to serialize Keras models, regardless of the backend they were loaded with, into a format optimized for OpenVINO inference.

Highlights

  • New Export Format: Introduced 'openvino' as a new supported format for the model.export() method, allowing Keras models to be serialized into OpenVINO Intermediate Representation (IR).
  • OpenVINO Export Logic: Implemented a dedicated export_openvino function that handles the conversion of Keras models to OpenVINO IR. This function adapts its conversion strategy based on the active Keras backend (e.g., direct conversion for TensorFlow, or via a temporary tf_saved_model for JAX/Torch); a rough skeleton of this dispatch is sketched just after this list.
  • Backend Compatibility: The new OpenVINO export functionality supports models originating from OpenVINO, TensorFlow, JAX, and Torch backends, providing a unified way to generate OpenVINO IR from Keras models.
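
The following is a rough, hypothetical skeleton of that backend dispatch, not the implementation actually merged in this PR; the function name export_openvino_sketch and the error message are illustrative only:

import tempfile

import openvino as ov
from keras.src import backend


def export_openvino_sketch(model, filepath):
    # Illustrative only: convert a Keras model to OpenVINO IR and save it.
    if backend.backend() == "tensorflow":
        # TensorFlow: convert the in-memory model directly.
        ov_model = ov.convert_model(model)
    elif backend.backend() in ("jax", "torch"):
        # JAX/Torch: round-trip through a temporary tf_saved_model export.
        with tempfile.TemporaryDirectory() as temp_dir:
            model.export(temp_dir, format="tf_saved_model")
            ov_model = ov.convert_model(temp_dir)
    else:
        # The openvino-backend path (reusing the backend's own ov.Model)
        # is omitted from this sketch.
        raise NotImplementedError(
            f"Not sketched for backend: {backend.backend()}"
        )
    ov.save_model(ov_model, filepath)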

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces the capability to export Keras models to the OpenVINO format. I've identified a few issues in the export_openvino function related to the OpenVINO backend export logic and file path handling for the JAX and Torch backends. Addressing these points will help ensure the new feature is robust and works as expected across all supported backends.


codecov-commenter commented Jul 16, 2025

Codecov Report

Attention: Patch coverage is 81.72043% with 17 lines in your changes missing coverage. Please review.

Project coverage is 82.90%. Comparing base (d55a767) to head (0166bd6).
Report is 12 commits behind head on master.

Files with missing lines Patch % Lines
keras/src/export/openvino.py 84.88% 4 Missing and 9 partials ⚠️
keras/src/backend/openvino/core.py 0.00% 1 Missing and 1 partial ⚠️
keras/src/models/model.py 50.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21486      +/-   ##
==========================================
+ Coverage   82.81%   82.90%   +0.09%     
==========================================
  Files         565      566       +1     
  Lines       55520    55892     +372     
  Branches     8664     8724      +60     
==========================================
+ Hits        45977    46337     +360     
  Misses       7428     7428              
- Partials     2115     2127      +12     
Flag Coverage Δ
keras 82.70% <77.41%> (+0.08%) ⬆️
keras-jax 64.02% <37.63%> (+0.63%) ⬆️
keras-numpy 58.51% <13.97%> (-0.08%) ⬇️
keras-openvino 34.62% <52.68%> (+0.63%) ⬆️
keras-tensorflow 64.44% <33.33%> (+0.60%) ⬆️
keras-torch 64.09% <35.48%> (+0.58%) ⬆️


@Mohamed-Ashraf273 Mohamed-Ashraf273 force-pushed the support_export branch 2 times, most recently from 95e8782 to c0614a3, on July 16, 2025 at 12:26
@Mohamed-Ashraf273 (Contributor Author)

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces the export_openvino function, enabling Keras models to be exported to the OpenVINO format. The implementation supports direct export when using the OpenVINO backend, and conversion from other backends like TensorFlow, JAX, and Torch. The review focuses on improving code clarity and robustness.

@Mohamed-Ashraf273 Mohamed-Ashraf273 marked this pull request as ready for review July 16, 2025 16:53
@Mohamed-Ashraf273 Mohamed-Ashraf273 force-pushed the support_export branch 7 times, most recently from 8fa393e to 107254f, on July 16, 2025 at 21:18
@Mohamed-Ashraf273 (Contributor Author)

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a valuable new feature: the ability to export Keras models to the OpenVINO format, enabling optimized inference on Intel hardware. I've identified a high-severity issue in how dynamic shapes are handled, a minor redundancy in the export logic, and a duplicated line in the new tests. Addressing these points will enhance the correctness and quality of this new feature.

Comment on lines 229 to 230
larger_input_y = np.concatenate([ref_input_y, ref_input_y], axis=0)
compiled_model([larger_input_x, larger_input_y])
Contributor

medium

These two lines appear to be a copy-paste error, as they duplicate the logic from the preceding lines (227-228). The redundant creation of larger_input_y and the second call to compiled_model can be removed to clean up the test.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@Mohamed-Ashraf273 Mohamed-Ashraf273 changed the title from "[OpenVINO backend] suppor export model using openvino format" to "[OpenVINO backend] support export model using openvino format" on Jul 16, 2025
Comment on lines 92 to 102
    elif backend.backend() == "tensorflow":
        import tempfile

        with tempfile.TemporaryDirectory() as temp_dir:
            model.export(temp_dir, format="tf_saved_model")
            ov_model = ov.convert_model(temp_dir)
    else:
        raise NotImplementedError(
            "`export_openvino` is only compatible with OpenVINO and "
            "TensorFlow backends."
        )
Contributor

not sure, why do we need it here?

Contributor Author

This covers the case where the model is loaded in the tf backend and the user needs to convert it to OpenVINO IR.

@rkazants rkazants (Contributor) Jul 17, 2025

ov.convert_model supports converting a TF object from memory as well. Will that work?

Contributor Author

No, it doesn't work in our case because the model has dynamic input shapes and no example_input is passed. This causes ov.convert_model to fail with a ValueError related to unknown TensorShape. Exporting to a SavedModel first resolves this by providing concrete input signatures.

Contributor

If the example_input is the only blocker, we can use convert_spec_to_tensor to create a dummy input.

Ref: https://github.com/keras-team/keras/blob/master/keras/src/export/onnx.py#L91-L95
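
For context, the referenced onnx.py lines build dummy inputs roughly as follows. This is only a sketch; the helper names get_input_signature and convert_spec_to_tensor and their signatures are taken from my reading of keras/src/export and may differ, and `model` is assumed to be the Keras model being exported:

from keras.src import tree
from keras.src.export.export_utils import convert_spec_to_tensor
from keras.src.export.export_utils import get_input_signature

# Sketch: derive the model's input signature, then replace dynamic (None)
# dimensions with 1 to obtain concrete dummy tensors for conversion.
input_signature = get_input_signature(model)
sample_inputs = tree.map_structure(
    lambda spec: convert_spec_to_tensor(spec, replace_none_number=1),
    input_signature,
)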

@Mohamed-Ashraf273 Mohamed-Ashraf273 (Contributor Author) Jul 18, 2025

You're right that convert_spec_to_tensor() can provide a dummy input (it is already used by the OpenVINO backend), but the issue here isn't just about having any input; it's about the shape of that input.
In our case, the model was traced with a specific shape, and we later ran inference with a different shape, which caused a mismatch error in OpenVINO.
It might be solvable in more complex ways, or the way the onnx exporter handles its TensorFlow path, but for OpenVINO I think exporting to tf_saved_model first is the safest and easiest method.

@rkazants rkazants (Contributor) left a comment

@fchollet, looks good to me. Recommend to merge.

@Mohamed-Ashraf273 (Contributor Author) commented Jul 22, 2025

@rkazants
@fchollet
I've added support for all backends, including torch and jax. I also avoided writing the model to disk for the TensorFlow case, and the PR is now ready for review.
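
A minimal, hypothetical sketch of the in-memory TensorFlow path described here; whether ov.convert_model accepts the Keras model object directly like this, or whether the merged implementation passes an explicit input signature, is an assumption on my part:

import openvino as ov
from keras import layers, models

# Sketch: build a small model on the TensorFlow backend, convert it in memory
# (no intermediate SavedModel written to disk), then serialize the IR.
model = models.Sequential([layers.Dense(4, activation="relu"), layers.Dense(2)])
model.build((None, 5))  # give the model a known feature size with dynamic batch
ov_model = ov.convert_model(model)
ov.save_model(ov_model, "model_from_memory.xml")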

@Mohamed-Ashraf273 (Contributor Author)

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces support for exporting Keras models to the OpenVINO format, which is a great addition for inference on Intel hardware. The implementation handles conversions from TensorFlow, JAX, Torch, and the native OpenVINO backends. The code includes tests for various model architectures and input structures.

@Mohamed-Ashraf273 Mohamed-Ashraf273 changed the title from "[OpenVINO backend] support export model using openvino format" to "[OpenVINO backend] support export model from the supported backends to openvino format" on Jul 22, 2025
model inputs. If not provided, it will be inferred.
**kwargs: Additional keyword arguments (currently unused).
Example:
import keras
Collaborator

Please use a code block (and no starting indent) for the code example

Contributor Author

Done!

input_signature: Optional. Specifies the shape and dtype of the
model inputs. If not provided, it will be inferred.
**kwargs: Additional keyword arguments (currently unused).
Example:
Collaborator

Add line break above

Contributor Author

Done!

@fchollet fchollet (Collaborator) left a comment

Looks good, thank you!

@google-ml-butler google-ml-butler bot added the kokoro:force-run and ready to pull labels Jul 24, 2025
@fchollet fchollet merged commit 77883ff into keras-team:master Jul 24, 2025
7 checks passed
@Mohamed-Ashraf273 Mohamed-Ashraf273 deleted the support_export branch July 24, 2025 22:06