Replies: 2 comments
- I was able to get it working on google/gemma-4-26B-A4B-it
- Seems like it's been working for the past few models. Uploading them here for anyone interested:
- One question, as I have tried multiple models now: how do I get the oQ+ quantization working? The run only gets as far as this log line:
2026-04-16 16:45:15,305 - omlx.oq - INFO - [-] - oQ8: layer sensitivity (descending): L53=0.0002, L59=0.0001, L58=0.0001, L41=0.0001, L57=0.0001, L47=0.0000, L52=0.0000, L1=0.0000, L29=0.0000, L40=0.0000, L56=0.0000, L55=0.0000, L35=0.0000, L0=0.0000, L39=0.0000, L54=0.0000, L49=0.0000, L42=0.0000, L51=0.0000, L43=0.0000, L48=0.0000, L28=0.0000, L38=0.0000, L26=0.0000, L44=0.0000, L34=0.0000, L46=0.0000, L30=0.0000, L27=0.0000, L31=0.0000, L37=0.0000, L45=0.0000, L36=0.0000, L32=0.0000, L50=0.0000, L23=0.0000, L33=0.0000, L25=0.0000, L12=0.0000, L24=0.0000, L17=0.0000, L5=0.0000, L8=0.0000, L11=0.0000, L22=0.0000, L15=0.0000, L7=0.0000, L6=0.0000, L20=0.0000, L9=0.0000, L3=0.0000, L19=0.0000, L21=0.0000, L10=0.0000, L4=0.0000, L18=0.0000, L13=0.0000, L2=0.0000, L16=0.0000, L14=0.0000
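For context on the "layer sensitivity (descending)" line in that log: tools in this space typically quantize each layer in isolation, measure how much that layer's output changes against the full-precision output, and rank layers so the most sensitive ones can be kept at higher precision. The sketch below is only a generic illustration of that idea, not omlx.oq's actual criterion or API; `quantize_int8`, `layer_sensitivity`, and the toy data are all hypothetical.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric round-to-nearest int8 quantization, returned dequantized."""
    scale = max(float(np.abs(w).max()) / 127.0, 1e-12)
    return np.clip(np.round(w / scale), -127, 127) * scale

def layer_sensitivity(layers, x):
    """Rank layers by how much int8 quantization perturbs each layer's
    output relative to full precision (higher = more sensitive).
    Illustrative only; omlx.oq's real metric may differ."""
    scores = {}
    for name, w in layers.items():
        ref = x @ w                       # full-precision layer output
        err = x @ quantize_int8(w) - ref  # perturbation from quantizing w
        scores[name] = float(np.mean(err**2) / (np.mean(ref**2) + 1e-12))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage: four random "layers" scored against one calibration batch.
rng = np.random.default_rng(0)
layers = {f"L{i}": rng.normal(size=(64, 64)) for i in range(4)}
x = rng.normal(size=(8, 64))
print(", ".join(f"{n}={s:.4f}" for n, s in layer_sensitivity(layers, x)))
```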