Commit b014f72

Fix for <end_of_turn> token during inference (#622)
Fix for the JIRA reported by the Imagine team.

Signed-off-by: Ann Kuruvilla <[email protected]>
1 parent f4ff803 commit b014f72

File tree: 1 file changed (+1, -1 lines)


examples/gemma3_example/gemma3_mm.py

Lines changed: 1 addition & 1 deletion

@@ -105,5 +105,5 @@
 )
 inputs["pixel_values"] = inputs["pixel_values"].to(torch.float32)
 output = qeff_model.generate(inputs=inputs, generation_len=100)
-print(tokenizer.batch_decode(output.generated_ids))
+print(tokenizer.batch_decode(output.generated_ids, skip_special_tokens=True))
 print(output)

0 commit comments