
Conversation

@Pouyanpi commented Oct 17, 2025

Extends the cache system to store and restore LLM metadata (model name and provider name) alongside cache entries. This allows cached results to maintain provenance information about which model and provider generated the original response.

Changes

  • Added LLMMetadataDict and LLMCacheData TypedDict definitions for type safety
  • Extended CacheEntry to include optional llm_metadata field
  • Implemented extract_llm_metadata_for_cache() to capture model and provider info from context (see the sketch after this list)
  • Implemented restore_llm_metadata_from_cache() to restore metadata when retrieving cached results
  • Updated get_from_cache_and_restore_stats() to handle metadata extraction and restoration
  • Added comprehensive test coverage for metadata caching functionality
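
To make the shape of these changes concrete, here is a minimal, self-contained sketch. The type and function names are taken from this PR; the field names, the `ContextVar` used to model "context", and the dict-backed cache are illustrative assumptions rather than the actual NeMo Guardrails implementation (in particular, the real `get_from_cache_and_restore_stats()` also restores LLM stats, which this sketch omits):

```python
from contextvars import ContextVar
from typing import Dict, Optional, TypedDict


class LLMMetadataDict(TypedDict):
    """Provenance of a cached response."""

    model_name: str
    provider_name: str


class LLMCacheData(TypedDict, total=False):
    """Payload stored per cache entry; llm_metadata is optional."""

    result: dict
    llm_metadata: LLMMetadataDict


# Hypothetical context variable standing in for however the runtime
# tracks the active model and provider.
_llm_context: ContextVar[Optional[LLMMetadataDict]] = ContextVar(
    "_llm_context", default=None
)

# Toy in-memory cache used only to make the flow concrete.
_cache: Dict[str, LLMCacheData] = {}


def extract_llm_metadata_for_cache() -> Optional[LLMMetadataDict]:
    """Capture model/provider info from the current context, if present."""
    return _llm_context.get()


def restore_llm_metadata_from_cache(entry: LLMCacheData) -> None:
    """On a cache hit, push the stored provenance back into the context."""
    metadata = entry.get("llm_metadata")
    if metadata is not None:
        _llm_context.set(metadata)


def put_in_cache(key: str, result: dict) -> None:
    """Store a result, attaching metadata when the context provides it."""
    entry: LLMCacheData = {"result": result}
    metadata = extract_llm_metadata_for_cache()
    if metadata is not None:
        entry["llm_metadata"] = metadata
    _cache[key] = entry


def get_from_cache_and_restore_stats(key: str) -> Optional[dict]:
    """Return a cached result and restore its provenance, or None on a miss."""
    entry = _cache.get(key)
    if entry is None:
        return None
    restore_llm_metadata_from_cache(entry)
    return entry["result"]


_llm_context.set({"model_name": "gpt-4o", "provider_name": "openai"})
put_in_cache("prompt-123", {"text": "cached answer"})
assert get_from_cache_and_restore_stats("prompt-123") == {"text": "cached answer"}
```

The point of the round trip is that a cache hit behaves like a fresh call for observability purposes: downstream consumers still see which model and provider produced the response.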

Dependencies

Part of Stack

This is PR 2/5 in the NeMoGuards caching feature stack.

@codecov-commenter commented

Codecov Report

✅ All modified and coverable lines are covered by tests.

@Pouyanpi force-pushed the feat/cache-interface-cleanup branch from 053cd1c to cb827ae on October 17, 2025 10:38
@Pouyanpi force-pushed the feat/cache-llm-metadata branch from b05cac4 to e725d77 on October 17, 2025 10:39
@Pouyanpi added this to the v0.18.0 milestone on October 17, 2025
@Pouyanpi self-assigned this on October 17, 2025
Base automatically changed from feat/cache-interface-cleanup to develop on October 17, 2025 14:47

@tgasser-nv left a comment

Looks good, just a few cleanup nits in the tests to address before merging

@Pouyanpi force-pushed the feat/cache-llm-metadata branch from e725d77 to fd873b7 on October 19, 2025 10:00
@Pouyanpi merged commit 32d57f5 into develop on October 19, 2025 (7 checks passed)
@Pouyanpi deleted the feat/cache-llm-metadata branch on October 19, 2025 10:08