Fix TGI (Text Generation Inference) Endpoint Inference and TGI JSON Grammar Generation #502

base: main
```diff
@@ -101,7 +101,7 @@ def __init__(self, config: TGIModelConfig) -> None:
         model_name = str(self.model_info["model_id"])
         model_sha = self.model_info["model_sha"]
-        model_precision = self.model_info["model_dtype"]
+        model_precision = self.model_info.get("model_dtype")
         self.model_info = ModelInfo(
             model_name=model_name,
             model_sha=model_sha,
```
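The switch from subscripting to `.get()` makes the missing-key case non-fatal: if the info payload (presumably the response from TGI's `/info` endpoint) has no `model_dtype` field, `dict.get` returns `None` instead of raising `KeyError`. A minimal sketch of the difference:

```python
# Hypothetical /info payload that omits "model_dtype".
info = {"model_id": "gpt2", "model_sha": "abc123"}

# info["model_dtype"]           # raises KeyError, aborting model setup
print(info.get("model_dtype"))  # prints None; initialization can continue
```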
```diff
@@ -127,7 +127,24 @@ def _async_process_request(
             grammar=grammar,
         )

-        generated_text = self.client.generate(prompt=context, generation_config=generation_config)
+        generated_text = self.client.generate(
+            prompt=context,
+            do_sample=generation_config.do_sample or False,
+            max_new_tokens=generation_config.max_new_tokens,
+            best_of=generation_config.best_of,
+            repetition_penalty=generation_config.repetition_penalty,
+            return_full_text=generation_config.return_full_text or False,
+            seed=generation_config.seed,
+            stop_sequences=generation_config.stop,
+            temperature=generation_config.temperature,
+            top_k=generation_config.top_k,
+            top_p=generation_config.top_p,
+            truncate=generation_config.truncate,
+            typical_p=generation_config.typical_p,
+            watermark=generation_config.watermark or False,
+            decoder_input_details=generation_config.decoder_input_details,
+            grammar=generation_config.grammar,
+        )

         return generated_text
```
Comment on lines +130 to +147

Reviewer: is this needed?

Author: IIRC, this is the interface. Did you mean, is there a cleaner way to do this?
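For context on this exchange: as far as I can tell from the `text_generation` Python client, `Client.generate` takes each sampling option as its own keyword argument and has no `generation_config=` parameter, so the replaced one-liner would have failed with a `TypeError`. A hedged sketch of the call shape (the endpoint URL and prompt are placeholders):

```python
# Sketch assuming the `text_generation` client package.
from text_generation import Client

client = Client("http://localhost:8080")  # placeholder TGI endpoint

# Sampling options are individual keyword arguments, mirroring the diff
# above; there is no single `generation_config=` parameter on generate().
response = client.generate(
    "What is Deep Learning?",
    max_new_tokens=32,
    temperature=0.7,
    do_sample=True,
)
print(response.generated_text)
```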
@NathanHB: why is the grammar in the request here while it is defined in the generation config?

Author: Thank you for the feedback @NathanHB 🙏🏻 I did it similarly to how I've seen it done elsewhere in the same file; see here for an example. Potentially the usage could be improved in a follow-up PR. Let me know if I'm also missing something on my end.
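On the grammar question, for readers following along: TGI's guided generation takes the grammar as part of the generate request itself, which is presumably why it surfaces as a `grammar=` keyword here even though this codebase carries it on the generation config. A hedged sketch of JSON-constrained generation with the `text_generation` client (the schema and prompt are invented for illustration):

```python
# Sketch assuming `text_generation`'s Grammar type; schema is invented.
from text_generation import Client
from text_generation.types import Grammar

client = Client("http://localhost:8080")  # placeholder endpoint

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}

# Constrain decoding so the output parses as JSON matching the schema.
response = client.generate(
    "Reply in JSON. What is the capital of France?",
    max_new_tokens=64,
    grammar=Grammar(type="json", value=schema),
)
print(response.generated_text)  # e.g. '{"answer": "Paris"}'
```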