convert: rework ftype heuristics #18214
base: master
Conversation
I was wondering if we should warn the user about using
Signed-off-by: Aaron Teo <[email protected]>

convert: fix type-check
Signed-off-by: Aaron Teo <[email protected]>

convert: bring back heuristics comment
Signed-off-by: Aaron Teo <[email protected]>
Force-pushed from bcc2001 to eae4555
Is it really necessary to check all the tensors? Wasn't the issue just what it defaulted to? The reason I'm asking is that this duplicates tensor loading logic that really needs to be refactored, see #18043 (review). Apart from MXFP4/FP8 models I can't remember seeing any safetensors with mixed datatypes...
@CISC once I finish my refactoring of |
pwilkin left a comment
The heuristics should work a bit differently: you should only take into account (a) at-least-2D tensors and (b) non-F32 tensors.
Edit: Ignore what I said. Will revert to using the first tensor :)
In retrospect, I think looping through all the tensors might not have been a good idea, especially if the model is large. In this case I'll revert to using the first tensor and check whether it qualifies. If the conditions aren't met, we'll just jump to the next tensor and check again. Let me know your thoughts about this :)
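A minimal sketch of that fallback loop, assuming the criteria discussed in this thread (skip 1-D tensors, pick the first 16-bit dtype, otherwise fall back to f16). `guess_ftype` is a hypothetical helper, not the PR's actual code; `get_tensors()` and `gguf.LlamaFileType` mirror names from convert_hf_to_gguf.py and gguf-py:

```python
import torch
import gguf

def guess_ftype(model) -> gguf.LlamaFileType:
    for _, tensor in model.get_tensors():
        # 1-D tensors (biases, norms) are often F32 even in otherwise
        # 16-bit checkpoints, so they would skew the guess: skip them
        if tensor.dim() < 2:
            continue
        if tensor.dtype == torch.bfloat16:
            return gguf.LlamaFileType.MOSTLY_BF16
        if tensor.dtype == torch.float16:
            return gguf.LlamaFileType.MOSTLY_F16
    # no tensor matched the criteria: default to f16
    return gguf.LlamaFileType.MOSTLY_F16
```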
Signed-off-by: Aaron Teo <[email protected]>
Signed-off-by: Aaron Teo <[email protected]>
Updated this PR and retained the original behaviour: if there are no matches, it defaults to the f16 ftype. Updated the PR description as well, PTAL again.
CISC left a comment
Might want to enumerate and have some threshold before giving up, but optional.
```python
logger.info(f"choosing --outtype f16 from first tensor type ({first_tensor.dtype})")
self.ftype = gguf.LlamaFileType.MOSTLY_F16
for _, tensor in self.get_tensors():
    if tensor.dim() < 2 and tensor.dtype == torch.float32:
```
Suggested change:

```diff
-if tensor.dim() < 2 and tensor.dtype == torch.float32:
+if tensor.dim() < 2:
```
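For the optional threshold CISC suggests above, a sketch built on the hunk under review; `MAX_PROBE` is a made-up name and value, and `probe_ftype` a hypothetical helper, neither taken from the PR:

```python
import torch
import gguf

MAX_PROBE = 32  # made-up threshold: how many tensors to inspect before giving up

def probe_ftype(model) -> gguf.LlamaFileType:
    ftype = gguf.LlamaFileType.MOSTLY_F16  # default, as in the hunk above
    for i, (_, tensor) in enumerate(model.get_tensors()):
        if i >= MAX_PROBE:
            break  # give up rather than walk a huge checkpoint
        if tensor.dim() < 2:
            continue  # skip biases/norms, per the suggestion above
        if tensor.dtype == torch.bfloat16:
            ftype = gguf.LlamaFileType.MOSTLY_BF16
        break  # the first >=2-D tensor decides either way
    return ftype
```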
fixes: #18182
This PR updates the heuristic detection logic for the default ftype. When --outtype is not specified, the heuristics will attempt to figure out the highest-fidelity 16-bit ftype based on the first tensor. If the first tensor does not match the criteria, it will continue through the second and subsequent tensors until it finds one that matches.
If all tensors fail to match the criteria above, it will default to the f16 ftype.

Tested against the following models:
Note about alternative methods, such as relying on the config.json dtype: some finetunes or quantisations lie about the correct dtype, so it can't be trusted. And some models, like GPT-OSS, do not actually contain a dtype key within their config.json, so the easier route was heuristics.

AI Declaration: AI was used when creating this PR to identify existing logic relating to heuristics and to scaffold the code.