dora implementation in keras #2500
base: master
Conversation
Summary of Changes
Hello @Ajinkya-25, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a significant feature to the Keras Hub library by implementing DoRA (Weight-Decomposed Low-Rank Adaptation) for dense and embedding layers. DoRA offers a parameter-efficient approach to fine-tuning large models by decoupling the magnitude and direction of weight updates.
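For readers new to the technique, a brief recap (not part of the PR text, and written with the `W_0 + AB` ordering that a later review in this thread recommends): DoRA re-parameterizes a pretrained weight matrix as

$$
W' = m \odot \frac{W_0 + \tfrac{\alpha}{r}\,AB}{\lVert W_0 + \tfrac{\alpha}{r}\,AB \rVert_c}
$$

where $W_0$ is the frozen pretrained weight, $A \in \mathbb{R}^{d \times r}$ and $B \in \mathbb{R}^{r \times k}$ are the trainable low-rank matrices, $m$ is the trainable magnitude vector, $\alpha / r$ is the scaling factor, and $\lVert \cdot \rVert_c$ denotes the column-wise L2 norm. Only $A$, $B$, and $m$ are updated during fine-tuning.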
Code Review
This pull request introduces DoRA (Weight-Decomposed Low-Rank Adaptation) layers (DoRADense, DoRAEmbedding, DoRAPositionEmbedding) and integrates them into the BertBackbone. The changes are extensive and well-tested, adding a significant new capability for parameter-efficient fine-tuning.
My review focuses on adherence to the repository's style guide, particularly regarding docstrings and type hints, and on the correctness of the new layer implementations.
Key feedback points:
- The PR title 'docstring changes in dora files' is not representative of the significant feature addition. It would be better to have a more descriptive title.
- There are several violations of the style guide regarding type hints in function signatures and docstring formatting.
- The `DoRAEmbedding` layer has an unimplemented `sparse` argument and an incorrect docstring for its `call` method.
Overall, this is a great contribution. Addressing the feedback will improve consistency and correctness.
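To make the reviewed decomposition concrete, here is a minimal, illustrative sketch of a DoRA-style dense layer in Keras 3. The class and variable names are assumptions for illustration only and do not reproduce the exact implementation under review.

```python
import keras
from keras import ops


class DoRADenseSketch(keras.layers.Layer):
    """Illustrative DoRA dense layer: frozen W_0, trainable low-rank A/B,
    and a trainable per-output-column magnitude vector m (sketch only)."""

    def __init__(self, units, rank=8, alpha=16.0, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.rank = rank
        self.scaling = alpha / rank

    def build(self, input_shape):
        in_dim = input_shape[-1]
        # Frozen pretrained weight W_0.
        self.w0 = self.add_weight(
            shape=(in_dim, self.units), initializer="glorot_uniform",
            trainable=False, name="w0")
        # Low-rank update: delta_W = A @ B with A (in_dim, rank), B (rank, units).
        self.a = self.add_weight(
            shape=(in_dim, self.rank), initializer="he_uniform",
            trainable=True, name="lora_a")
        self.b = self.add_weight(
            shape=(self.rank, self.units), initializer="zeros",
            trainable=True, name="lora_b")
        # Magnitude vector, initialized to the column-wise L2 norms of W_0.
        self.m = self.add_weight(
            shape=(self.units,), initializer="ones",
            trainable=True, name="magnitude")
        self.m.assign(ops.sqrt(ops.sum(ops.square(self.w0), axis=0)))

    def call(self, inputs):
        # Direction: W_0 + scaling * (A @ B), rescaled column-wise to unit norm,
        # then multiplied by the learned magnitude vector.
        w = self.w0 + self.scaling * ops.matmul(self.a, self.b)
        col_norm = ops.sqrt(ops.sum(ops.square(w), axis=0, keepdims=True))
        w = self.m * (w / col_norm)
        return ops.matmul(inputs, w)
```

With `B` initialized to zeros, the layer reproduces the frozen `W_0` exactly at the start of fine-tuning, and only `a`, `b`, and `m` receive gradients.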
/gemini review
Code Review
This pull request introduces DoRA (Weight-Decomposed Low-Rank Adaptation) functionality to KerasHub, specifically implementing DoRADense and DoRAEmbedding layers, along with DoRAPositionEmbedding and utility functions to convert standard Keras Dense and Embedding layers to their DoRA counterparts. New files dora_dense.py and dora_embeddings.py define these layers, which decompose weights into frozen pretrained components, learnable low-rank matrices, and a learnable magnitude vector for parameter-efficient fine-tuning.
Comprehensive unit tests for these new layers are added in dora_dense_test.py and dora_embeddings_test.py, covering creation, validation, forward pass, weight initialization, parameter counting, serialization, and mathematical properties, with a focus on backend compatibility. The BERT backbone model (bert_backbone.py) is updated to optionally use these DoRA layers for its token embedding, position embedding, and pooled dense layers, controlled by new enable_dora, dora_rank, and dora_alpha parameters. Corresponding tests in bert_backbone_test.py verify the DoRA-enabled BERT backbone's functionality, configuration preservation, and output shape consistency with the regular model, including saving and loading.
Review comments highlighted the need to ensure numerical stability by clipping column_norms to a small minimum value during magnitude vector initialization in dora_embeddings.py, and to add a crucial test for verifying the correct initialization of the magnitude vector to column-wise L2 norms of pretrained embeddings in dora_embeddings_test.py.
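The numerical-stability point above can be illustrated with a small, hypothetical helper; the name `init_magnitude` and the `epsilon` value are assumptions, not code from the PR:

```python
from keras import ops


def init_magnitude(pretrained_weights, epsilon=1e-6):
    # Column-wise L2 norms of the frozen pretrained weights, clipped away
    # from zero so later divisions by these norms stay numerically stable.
    column_norms = ops.sqrt(ops.sum(ops.square(pretrained_weights), axis=0))
    return ops.maximum(column_norms, epsilon)
```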
/gemini review
Code Review
This pull request introduces DoRA (Weight-Decomposed Low-Rank Adaptation) functionality to KerasHub, specifically implementing DoRADense, DoRAEmbedding, and DoRAPositionEmbedding layers. These layers enable parameter-efficient fine-tuning by decomposing weights into frozen base weights, trainable low-rank matrices (A and B), and a trainable magnitude vector. Utility functions convert_dense_to_dora and convert_embedding_to_dora are also added to facilitate converting standard Keras layers to their DoRA counterparts.
The BERT backbone model is updated to optionally use these new DoRA layers for its token embeddings, position embeddings, and pooled dense layer, controlled by enable_dora, dora_rank, and dora_alpha parameters. Comprehensive unit tests for all new DoRA layers and their integration into the BERT backbone are included, covering functionality, parameter validation, weight initialization, serialization, and mathematical properties.
Review comments suggest correcting docstring typos in the DoRA formulas (W_0 + BA to W_0 + AB), ensuring consistent scaling application across DoRA layers (applying scaling after matrix multiplication), and exposing public get_effective_position_embeddings() and merge_weights() methods for DoRAPositionEmbedding for consistency and utility.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
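A short sketch of what the suggested `merge_weights()` behaviour amounts to, with the scaling factor applied after the matrix multiplication as the review recommends; the function name and argument layout here are assumptions, not the PR's API:

```python
from keras import ops


def merge_dora_weights(w0, a, b, magnitude, scaling):
    # Direction term uses W_0 + scaling * (A @ B), i.e. scaling is applied
    # after the matrix multiplication, matching the suggested convention.
    directional = w0 + scaling * ops.matmul(a, b)
    column_norms = ops.sqrt(
        ops.sum(ops.square(directional), axis=0, keepdims=True))
    # Collapse magnitude and direction into a single dense kernel that could
    # replace the original layer's weights for inference.
    return magnitude * (directional / column_norms)
```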
It seems Gemini is giving suggestions that are of no use (expected from any AI model, which always tries to find something). The docstrings are consistent and the code seems correct after checking externally with AI; consider it for merging.
I think this would be best added in the core Keras repo - https://github.com/keras-team/keras
Summary
This PR adds DoRA (Weight-Decomposed Low-Rank Adaptation) support to KerasHub.
Changes
- `DoRADense` and `DoRAEmbedding` layers
Notes
- The API generation script (`api-gen`) has been run and committed
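Finally, a hedged usage sketch of the DoRA-enabled backbone described above. The `enable_dora`, `dora_rank`, and `dora_alpha` arguments come from this PR and may change before merging; the remaining arguments are the standard `BertBackbone` configuration.

```python
import numpy as np
import keras_hub

# Small BERT backbone with the DoRA options proposed in this PR.
backbone = keras_hub.models.BertBackbone(
    vocabulary_size=30522,
    num_layers=2,
    num_heads=2,
    hidden_dim=64,
    intermediate_dim=128,
    enable_dora=True,   # proposed in this PR
    dora_rank=8,        # low-rank dimension r
    dora_alpha=16.0,    # scaling numerator alpha
)

# Dummy inputs in the format BertBackbone expects.
inputs = {
    "token_ids": np.ones((1, 12), dtype="int32"),
    "segment_ids": np.zeros((1, 12), dtype="int32"),
    "padding_mask": np.ones((1, 12), dtype="int32"),
}
outputs = backbone(inputs)  # dict with "sequence_output" and "pooled_output"
```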