- Hi, I created an issue from this feature request. I will create a separate issue on the ExecuTorch repo to add support for this model. Once it is supported, we will be able to port this to React Native ExecuTorch :))
- OK, since I've received information that this model can be exported with optimum-executorch (see pytorch/executorch#14941), we need to make it possible to support such models in RNE. I will now escalate one task that needs to be done before we can port it, namely #609, since this model was probably exported with these kernels.
- Google DeepMind just dropped a small language model with 270M parameters that is very mobile/device friendly. I'm aware that Gemma 3 nano E1B is also available, but that model is still a big stretch for mobile apps. I'd kindly ask for some support on the RN bindings for Gemma 3 270M. This is going to be huge.
  Has anyone started working on, or is looking into, support for this model?
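  For reference, a rough sketch of what consuming such a model could look like with react-native-executorch's existing `useLLM` hook. The Gemma 3 270M `.pte` export and tokenizer file names are hypothetical placeholders (no official constants exist for this model yet), and the exact hook props and return fields may differ between library versions:

  ```tsx
  import React, { useEffect } from 'react';
  import { Text } from 'react-native';
  import { useLLM } from 'react-native-executorch';

  export function Gemma270MDemo() {
    // Hypothetical assets: a Gemma 3 270M .pte export plus its tokenizer,
    // bundled with the app; a tokenizer config prop may also be required.
    const llm = useLLM({
      modelSource: require('./assets/gemma3-270m.pte'),
      tokenizerSource: require('./assets/tokenizer.json'),
    });

    useEffect(() => {
      if (llm.isReady) {
        // Generation streams tokens into llm.response as it progresses.
        void llm.generate('Summarize on-device LLMs in one sentence.');
      }
    }, [llm.isReady]);

    return <Text>{llm.response}</Text>;
  }
  ```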