
Conversation

@pctablet505
Collaborator

This pull request updates the call method in parseq_decoder.py to improve compatibility with TensorFlow Lite and graph mode. The main change is replacing a standard Python conditional with ops.cond, ensuring the code works correctly in both eager and graph execution environments.

TensorFlow Lite/graph mode compatibility:

  • Replaced the Python if/else logic for building the content tensor with an ops.cond call, enabling proper conditional execution in graph mode. This change moves the content construction logic into a function and uses ops.cond to select between two execution paths based on the value of tokens_length.

Use ops.cond() instead of Python if-statement in PARSeqDecoder.call()
to ensure graph mode compatibility during TFLite conversion.

- Changed 'if tokens_length > 1:' to ops.cond(tokens_length > 1, ...)
- This allows TensorFlow to properly trace the graph with symbolic tensors

This fixes the 'Using a symbolic tf.Tensor as a Python bool is not allowed'
error during TFLite conversion when sequence length is dynamic (None).
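
A minimal sketch of the pattern described above (the tensor names and branch bodies are illustrative stand-ins, not the actual PARSeqDecoder internals):

```python
from keras import ops


def build_content(token_embeddings, pos_query, tokens_length):
    """Sketch of the ops.cond pattern; all names here are illustrative."""

    def multi_token_content():
        # Branch taken when more than one token is present: combine the
        # token embeddings with the positional queries.
        return token_embeddings + pos_query

    def single_token_content():
        # Branch taken when only a single token is present.
        return token_embeddings

    # ops.cond keeps the branch inside the computation graph, so tracing
    # with a symbolic tokens_length (e.g. during TFLite conversion with a
    # dynamic sequence length) no longer hits the
    # "Using a symbolic tf.Tensor as a Python bool" error.
    return ops.cond(
        tokens_length > 1, multi_token_content, single_token_content
    )
```

Note that both branch functions must return tensors of matching shape and dtype for the graph-mode conditional to trace correctly.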
@gemini-code-assist
Contributor

Summary of Changes

Hello @pctablet505, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the robustness and compatibility of the PARSeq decoder, specifically for TensorFlow Lite and general graph-mode execution. By refactoring a key conditional block in the call method from a Python if/else to ops.cond, the changes ensure that the model's behavior remains consistent and correct when compiled into a static graph, which is crucial for deployment on various platforms and for performance optimizations.

Highlights

  • TensorFlow Lite Compatibility: The primary goal of this PR is to improve the PARSeq decoder's compatibility with TensorFlow Lite and graph execution modes.
  • Conditional Logic Refactor: A standard Python if/else statement within the call method of parseq_decoder.py has been replaced with ops.cond to ensure proper conditional execution in graph mode.
  • Content Tensor Construction: The logic for building the content tensor, which previously used a Python conditional, now leverages ops.cond to dynamically select between two execution paths based on tokens_length.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

gemini-code-assist bot left a comment


Code Review

This pull request correctly refactors the call method in parseq_decoder.py to use ops.cond instead of a standard Python if statement. This is a crucial change for ensuring compatibility with TensorFlow Lite and graph mode execution, where control flow must be handled within the computation graph. The implementation is clean and correct, defining a separate function for the conditional logic, which improves readability. This change effectively addresses the issue and adheres to best practices for writing backend-agnostic and graph-compatible Keras code.

@pctablet505
Collaborator Author

@gemini-code-assist review

Contributor

gemini-code-assist bot left a comment


Code Review

This pull request updates the PARSeqDecoder to be compatible with TensorFlow Lite and graph mode by replacing a Python conditional with a branchless tensor-based implementation. The approach is generally correct and improves backend compatibility. However, I've identified an edge case in the new implementation where an input with a sequence length of 0 would cause incorrect slicing and likely lead to a runtime error. My review includes a specific code suggestion to fix this issue.

@laxmareddyp removed the request for review from sineeli on December 8, 2025 at 16:27
@pctablet505 requested review from sachinprasadhs and removed the request for sachinprasadhs on December 9, 2025 at 10:44
Simplifies content and query embedding construction for better compatibility with JAX/TF graph backends. Removes dynamic slicing and Python conditionals, using ops.take and shape-based indexing to ensure consistent tensor shapes.
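
For reference, a branchless construction along those lines could look like the sketch below; the names and shapes (token_embeddings of shape (batch, seq_len, dim), a pos_queries table of shape (max_len, dim)) are assumptions for illustration, not the merged code:

```python
from keras import ops


def build_content_branchless(token_embeddings, pos_queries):
    # Shape-based indexing: derive the sequence length from the input
    # tensor itself instead of branching on it in Python.
    seq_len = ops.shape(token_embeddings)[1]

    # ops.take gathers exactly seq_len rows from the positional-query
    # table, replacing dynamic slicing.
    pos = ops.take(pos_queries, ops.arange(seq_len), axis=0)  # (seq_len, dim)

    # Broadcasting over the batch dimension covers both the single-token
    # and multi-token cases, so no conditional is needed and the output
    # shape stays consistent under JAX/TF graph tracing.
    return token_embeddings + ops.expand_dims(pos, axis=0)
```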
@sachinprasadhs added the kokoro:force-run (Runs Tests on GPU) label on Dec 10, 2025
@kokoro-team removed the kokoro:force-run (Runs Tests on GPU) label on Dec 10, 2025
@sachinprasadhs merged commit 5bac50f into keras-team:master on Dec 10, 2025
11 checks passed