I'd like to propose adding an estimated token count to the generated output. This would help users know if their generated text fits within their LLM's context window limits.
Proposed Feature:
- Add an estimated token count at the beginning of both `llms.txt` and `llms-full.txt` files
- Display format could be something like: `Estimated Tokens: 12,345`
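For illustration, the top of a generated file might then look like this (the exact placement and wording are hypothetical, not part of this proposal):

```
Estimated Tokens: 12,345

...rest of the generated llms.txt content...
```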
Why this would be useful:
- Helps users immediately know if the generated text will fit their LLM's context window
- Prevents trial-and-error when loading large text files into LLMs
- Makes it easier to split content into appropriate chunk sizes if needed
Implementation Suggestions:
- Could use libraries like `tiktoken` or a simple character-based approximation (see the sketch below)
- Token count could be placed in a header section or metadata block at the start of the file
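A minimal sketch of how this could work in Python, assuming `tiktoken` as an optional dependency; the function names and the 4-characters-per-token fallback ratio are illustrative choices on my part, not part of this proposal:

```python
def estimate_tokens(text: str) -> int:
    """Return an estimated token count for `text`."""
    try:
        import tiktoken
        # cl100k_base is the encoding used by many recent OpenAI models.
        encoding = tiktoken.get_encoding("cl100k_base")
        return len(encoding.encode(text))
    except ImportError:
        # Simple character-based approximation: ~4 characters per token
        # is a common rule of thumb for English text.
        return len(text) // 4


def prepend_token_count(content: str) -> str:
    """Prepend the estimated token count header to generated output."""
    count = estimate_tokens(content)
    return f"Estimated Tokens: {count:,}\n\n{content}"
```

The character-based fallback keeps the feature dependency-free when `tiktoken` isn't installed, trading some accuracy for simplicity.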