
Conversation


@Ajinkya-25 Ajinkya-25 commented Dec 29, 2025

Summary

This PR adds DoRA (Weight-Decomposed Low-Rank Adaptation) support to KerasHub.

Changes

  • Introduced DoRADense and DoRAEmbedding layers
  • Added conversion utilities from existing Dense layers
  • Integrated DoRA into BERT backbone
  • Exposed new layers via public API
  • Added comprehensive unit tests covering:
    • Forward pass
    • Mathematical properties
    • Serialization
    • Backend compatibility

Notes

  • All tests pass locally
  • API generation hook (api-gen) has been run and committed
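
Example usage

An illustrative sketch only; the import path and constructor argument names below are assumptions based on the layer names above, not necessarily the final API:

```python
import keras
from keras_hub.layers import DoRADense  # assumed public export path

# Wrap a projection with DoRA: the pretrained kernel stays frozen while the
# low-rank matrices and the per-column magnitude vector are trained.
layer = DoRADense(units=768, rank=8, alpha=16.0)  # rank/alpha names assumed

x = keras.random.normal((2, 128, 768))
y = layer(x)  # output shape matches a regular Dense layer: (2, 128, 768)
```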

@gemini-code-assist
Contributor

Summary of Changes

Hello @Ajinkya-25, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant feature to the Keras Hub library by implementing DoRA (Weight-Decomposed Low-Rank Adaptation) for dense and embedding layers. DoRA offers a parameter-efficient approach to fine-tuning large models by decoupling the magnitude and direction of weight updates. The new DoRADense, DoRAEmbedding, and DoRAPositionEmbedding layers, along with conversion utilities, provide flexible options for model adaptation. Furthermore, these DoRA capabilities are integrated into the BertBackbone model, allowing users to leverage efficient fine-tuning strategies directly within BERT architectures. This enhancement aims to reduce computational overhead and memory footprint during model training without compromising performance.
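For reviewers new to the technique, here is a rough NumPy sketch of the weight decomposition described above (illustrative names only; the actual layer code in this PR may differ in details such as where the scaling is applied):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 32, 8, 16.0

W0 = rng.normal(size=(d_in, d_out))       # frozen pretrained weight
A = rng.normal(size=(d_in, rank)) * 0.01  # trainable low-rank factor
B = np.zeros((rank, d_out))               # trainable low-rank factor (zero-initialized)
m = np.linalg.norm(W0, axis=0)            # trainable magnitude, initialized to column norms

# Direction: frozen weight plus the scaled low-rank update, column-normalized.
W_adapted = W0 + (alpha / rank) * (A @ B)
direction = W_adapted / np.linalg.norm(W_adapted, axis=0, keepdims=True)

# Effective weight recombines the learned magnitude with the normalized direction.
W_eff = m * direction

# At initialization (B = 0) the layer reproduces the pretrained weight exactly.
assert np.allclose(W_eff, W0)
```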

Highlights

  • New DoRA Layers: Introduced DoRADense, DoRAEmbedding, and DoRAPositionEmbedding layers, which implement Weight-Decomposed Low-Rank Adaptation (DoRA) for parameter-efficient fine-tuning of dense and embedding layers.
  • BERT Backbone Integration: The BertBackbone model has been updated to optionally use these new DoRA layers for its token embeddings, position embeddings, and pooled dense layer, controlled by new enable_dora, dora_rank, and dora_alpha parameters.
  • Conversion Utilities: Added utility functions convert_dense_to_dora and convert_embedding_to_dora to facilitate easy transformation of standard Keras Dense and Embedding layers into their DoRA-enabled counterparts (see the sketch after this list).
  • Comprehensive Testing: Extensive unit tests have been added for all new DoRA layers and their integration into the BertBackbone, covering functionality, parameter validation, mathematical properties, and backend compatibility.
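
To make the backbone integration and conversion utilities above concrete, a minimal sketch (the DoRA-specific arguments come from this PR; the exact import path, signature, and remaining constructor values are illustrative assumptions):

```python
import keras
import keras_hub
from keras_hub.layers import convert_dense_to_dora  # export path assumed

# Convert an existing, built Dense layer into its DoRA-enabled counterpart.
dense = keras.layers.Dense(64)
dense.build((None, 32))
dora_dense = convert_dense_to_dora(dense, rank=8, alpha=16.0)  # signature assumed

# Enable DoRA directly when constructing the BERT backbone.
backbone = keras_hub.models.BertBackbone(
    vocabulary_size=30522,
    num_layers=2,
    num_heads=2,
    hidden_dim=64,
    intermediate_dim=128,
    enable_dora=True,   # new flag added in this PR
    dora_rank=8,        # new parameter added in this PR
    dora_alpha=16.0,    # new parameter added in this PR
)
```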


@Ajinkya-25 Ajinkya-25 changed the title from "docstring changes in dora files" to "dora implementation in keras" on Dec 29, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces DoRA (Weight-Decomposed Low-Rank Adaptation) layers (DoRADense, DoRAEmbedding, DoRAPositionEmbedding) and integrates them into the BertBackbone. The changes are extensive and well-tested, adding a significant new capability for parameter-efficient fine-tuning.

My review focuses on adherence to the repository's style guide, particularly regarding docstrings and type hints, and on the correctness of the new layer implementations.

Key feedback points:

  • The PR title 'docstring changes in dora files' is not representative of the significant feature addition. It would be better to have a more descriptive title.
  • There are several violations of the style guide regarding type hints in function signatures and docstring formatting.
  • The DoRAEmbedding layer has an unimplemented sparse argument and an incorrect docstring for its call method.

Overall, this is a great contribution. Addressing the feedback will improve consistency and correctness.

@Ajinkya-25
Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces DoRA (Weight-Decomposed Low-Rank Adaptation) functionality to KerasHub, specifically implementing DoRADense and DoRAEmbedding layers, along with DoRAPositionEmbedding and utility functions to convert standard Keras Dense and Embedding layers to their DoRA counterparts. New files dora_dense.py and dora_embeddings.py define these layers, which decompose weights into frozen pretrained components, learnable low-rank matrices, and a learnable magnitude vector for parameter-efficient fine-tuning. Comprehensive unit tests for these new layers are added in dora_dense_test.py and dora_embeddings_test.py, covering creation, validation, forward pass, weight initialization, parameter counting, serialization, and mathematical properties, with a focus on backend compatibility.

The BERT backbone model (bert_backbone.py) is updated to optionally use these DoRA layers for its token embedding, position embedding, and pooled dense layers, controlled by new enable_dora, dora_rank, and dora_alpha parameters. Corresponding tests in bert_backbone_test.py verify the DoRA-enabled BERT backbone's functionality, configuration preservation, and output shape consistency with the regular model, including saving and loading.

Review comments highlighted the need to ensure numerical stability by clipping column_norms to a small minimum value during magnitude vector initialization in dora_embeddings.py, and to add a test verifying that the magnitude vector is correctly initialized to the column-wise L2 norms of the pretrained embeddings in dora_embeddings_test.py.
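
For context, the clipping suggestion amounts to something like the following sketch of the magnitude initialization (illustrative; the actual code lives in dora_embeddings.py and the helper name here is hypothetical):

```python
from keras import ops

def init_magnitude(pretrained_embeddings, eps=1e-6):
    # Column-wise L2 norms of the frozen pretrained embedding matrix.
    column_norms = ops.sqrt(ops.sum(ops.square(pretrained_embeddings), axis=0))
    # Floor the norms so an all-zero column cannot yield a zero magnitude,
    # which would later cause a division by zero during normalization.
    return ops.maximum(column_norms, eps)
```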

@Ajinkya-25
Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces DoRA (Weight-Decomposed Low-Rank Adaptation) functionality to KerasHub, specifically implementing DoRADense, DoRAEmbedding, and DoRAPositionEmbedding layers. These layers enable parameter-efficient fine-tuning by decomposing weights into frozen base weights, trainable low-rank matrices (A and B), and a trainable magnitude vector. Utility functions convert_dense_to_dora and convert_embedding_to_dora are also added to facilitate converting standard Keras layers to their DoRA counterparts.

The BERT backbone model is updated to optionally use these new DoRA layers for its token embeddings, position embeddings, and pooled dense layer, controlled by enable_dora, dora_rank, and dora_alpha parameters. Comprehensive unit tests for all new DoRA layers and their integration into the BERT backbone are included, covering functionality, parameter validation, weight initialization, serialization, and mathematical properties.

Review comments suggest correcting docstring typos in the DoRA formulas (W_0 + BA to W_0 + AB), ensuring consistent scaling application across DoRA layers (applying scaling after matrix multiplication), and exposing public get_effective_position_embeddings() and merge_weights() methods for DoRAPositionEmbedding for consistency and utility.
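
To illustrate the scaling-order and merge_weights suggestions, a rough NumPy sketch of how a merged weight could be computed with the scaling applied after the A @ B product (the function name and signature are hypothetical):

```python
import numpy as np

def merge_dora_weights(W0, A, B, magnitude, alpha, rank, eps=1e-9):
    # Low-rank update with scaling applied after the matrix product:
    # W0 + (alpha / rank) * (A @ B).
    W_adapted = W0 + (alpha / rank) * (A @ B)
    # Column-normalize the adapted weight and rescale by the learned magnitude vector.
    column_norms = np.linalg.norm(W_adapted, axis=0, keepdims=True)
    return magnitude * (W_adapted / np.maximum(column_norms, eps))
```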

@Ajinkya-25
Author

It seems Gemini is giving suggestions that are of little use (expected from any AI model, which always tries to find something). The docstrings are consistent and the code seems correct after checking externally with AI; please consider it for merging.

@divyashreepathihalli
Collaborator

I think this would be best added in the core Keras repo - https://github.com/keras-team/keras
The LoRA implementation is already in Keras.
