feat: Add pagination and reduce token usage in responses #5
Summary
This PR addresses a significant issue with token consumption when using the TrueNAS MCP server with LLM clients. Initial list operations were consuming 40k+ tokens, because list results were returned unpaginated and detail responses included the full raw API payload by default.
Changes
Pagination Support (all list operations):
- `limit` parameter (default: 100, max: 500)
- `offset` parameter for pagination
- `pagination` metadata: `{total, limit, offset, returned, has_more}`

Affected tools:

- `list_pools`, `list_datasets`
- `list_snapshots`, `list_snapshot_tasks`
- `list_apps`, `list_instances`, `list_legacy_vms`
- `list_smb_shares`, `list_nfs_exports`, `list_iscsi_targets`
- `list_users`

Opt-in Raw Response:
- `get_app`, `get_instance`, `get_legacy_vm` now exclude the `raw` field by default
- `include_raw` parameter (default: false) restores it if needed for debugging

Dataset Control:
- `include_children` parameter added to `list_datasets` and `get_dataset` (default: true)

This PR introduces intentional breaking changes to reduce token usage:
- `get_app`, `get_instance`, `get_legacy_vm`: no longer include the `raw` field by default; pass `include_raw=true` to restore previous behavior
- All list operations: now paginated by default (limit=100); use `limit=500` for larger result sets, or implement pagination

Rationale
When using this MCP server with Claude or other LLM clients, excessive response sizes directly consume the model's limited context window.
With environments containing hundreds of snapshots, datasets, or users, the previous behavior was impractical for LLM use cases. These changes make the server significantly more efficient while maintaining full functionality through opt-in parameters.
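As a sketch of the contract these changes imply, the following illustrates the pagination metadata shape and a client-side loop that still retrieves a complete result set despite the default limit. The `paginate` and `call_tool` names are illustrative assumptions, not the server's actual code:

```python
def paginate(items, limit=100, offset=0, max_limit=500):
    """Illustrative server-side helper: slice a full result list and
    attach the documented pagination metadata."""
    limit = max(1, min(limit, max_limit))  # clamp to the allowed range
    page = items[offset:offset + limit]
    return {
        "results": page,
        "pagination": {
            "total": len(items),
            "limit": limit,
            "offset": offset,
            "returned": len(page),
            "has_more": offset + len(page) < len(items),
        },
    }

def fetch_all(call_tool, tool_name, limit=500):
    """Illustrative client-side loop: page through a list tool until
    has_more is false. `call_tool` stands in for whatever function the
    MCP client uses to invoke a tool."""
    results, offset = [], 0
    while True:
        resp = call_tool(tool_name, limit=limit, offset=offset)
        results.extend(resp["results"])
        if not resp["pagination"]["has_more"]:
            return results
        offset += resp["pagination"]["returned"]
```

A client that genuinely needs every snapshot can loop as above; typical LLM interactions are expected to stay within the first page.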
Documentation
Updated README.md with a new "Pagination and Response Control" section documenting:
- `include_raw` usage guidelines
- `include_children` for datasets

Test plan
- `limit` and `offset` parameters work correctly
- `include_raw=true` returns the full API response
- `include_children=false` excludes child datasets
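The `include_raw` and `include_children` behaviors above could be exercised with small checks like the following; both helpers are illustrative stand-ins for the server's formatting logic, under the assumption that child dataset ids contain a `/` separator:

```python
def format_app(app, include_raw=False):
    # Stand-in: drop the verbose 'raw' field unless the caller opts in.
    return app if include_raw else {k: v for k, v in app.items() if k != "raw"}

def filter_datasets(datasets, include_children=True):
    # Stand-in: a top-level dataset id contains no '/' separator.
    return datasets if include_children else [d for d in datasets if "/" not in d["id"]]

app = {"name": "plex", "state": "RUNNING", "raw": {"huge": "payload"}}
assert "raw" not in format_app(app)                # excluded by default
assert "raw" in format_app(app, include_raw=True)  # opt-in restores it

datasets = [{"id": "tank"}, {"id": "tank/media"}, {"id": "tank/media/tv"}]
assert filter_datasets(datasets, include_children=False) == [{"id": "tank"}]
assert filter_datasets(datasets) == datasets       # default keeps children
```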