
{docker-compose.yml,domain,go.mod,main,pkg}: Adds cache support. #20

Merged
chris-ramon merged 6 commits into main from issue-9 on Aug 26, 2025

Conversation


@chris-ramon chris-ramon commented Aug 25, 2025

Description

  • Closes: {cache}: Cache layer. #9
  • main: wires cache.
  • domain/users/service: wires cache pkg.
  • pkg/cache: consolidates ttl support.
  • pkg/cache: adds Cache pkg initial implementation.
  • go.mod: adds github.com/jellydator/ttlcache/v3 pkg.
  • docker-compose.yml: upgrades to golang:1.23.0.

Test Plan

✔️ Tested app works as expected.

Summary by CodeRabbit

  • New Features
    • Added in-memory TTL caching for user listings, improving response times on repeated requests.
  • Refactor
    • Application now initializes and injects a shared cache into the users service.
  • Chores
    • Upgraded Go runtime to 1.23.0.
    • Updated application container image to use Go 1.23.0.
    • Added a TTL cache library dependency.

@chris-ramon chris-ramon self-assigned this Aug 25, 2025

coderabbitai bot commented Aug 25, 2025

Walkthrough

Adds a new generic TTL cache package and integrates it into the users service’s FindUsers flow with JSON serialization and a 10-minute TTL. Updates main to construct and inject the cache. Bumps Go version to 1.23.0 in go.mod and docker-compose.

Changes

  • pkg/cache/cache.go (Cache package, new): Introduces a generic TTL cache built on jellydator/ttlcache: Cache[K,V] with New, Get, Set, the TTL type, DefaultTTL, and NoTTL; starts a background eviction loop.
  • domain/users/service.go (Users service caching): Adds a cache field and findUsersKey; NewService now requires a cache; FindUsers reads/writes JSON-encoded users to the cache with a 10m TTL; handles miss/hit and errors.
  • main.go (App wiring): Instantiates Cache[string, string] and injects it into users.NewService(...) per the new constructor signature.
  • go.mod (Module/deps): Bumps Go to 1.23.0; adds github.com/jellydator/ttlcache/v3 and indirect golang.org/x/sync.
  • docker-compose.yml (Container image): Updates the app service image from golang:1.22.3 to golang:1.23.0.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Client
  participant API as API Layer
  participant UsersSvc as Users Service
  participant Cache as Cache[string,string]
  participant Repo as Users Repo

  Client->>API: GET /users
  API->>UsersSvc: FindUsers()
  UsersSvc->>Cache: Get(findUsersKey)
  alt Cache hit
    Cache-->>UsersSvc: JSON bytes
    UsersSvc->>UsersSvc: Unmarshal JSON -> []User
    UsersSvc-->>API: []User
    API-->>Client: 200 OK + users
  else Cache miss
    UsersSvc->>Repo: FindUsers()
    Repo-->>UsersSvc: []User or error
    alt Success
      UsersSvc->>UsersSvc: Marshal users -> JSON
      UsersSvc->>Cache: Set(findUsersKey, JSON, TTL=10m)
      UsersSvc-->>API: []User
      API-->>Client: 200 OK + users
    else Error
      UsersSvc-->>API: error
      API-->>Client: 5xx/4xx
    end
  end
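The flow above is a classic cache-aside pattern. Below is a self-contained, stdlib-only sketch of the same logic; the names (entry, repoFindUsers, findUsers) are hypothetical stand-ins, since the real code lives in domain/users/service.go and pkg/cache:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// entry pairs a JSON payload with its expiry, mimicking a TTL cache slot.
type entry struct {
	payload   string
	expiresAt time.Time
}

var (
	cache    = map[string]entry{} // stand-in for Cache[string, string]
	repoHits = 0                  // counts repo (DB) round trips
)

// repoFindUsers simulates the repository layer.
func repoFindUsers() []User {
	repoHits++
	return []User{{ID: 1, Name: "ada"}}
}

// findUsers reads through the cache: hit -> unmarshal, miss -> repo + store.
func findUsers() ([]User, error) {
	if e, ok := cache["users"]; ok && time.Now().Before(e.expiresAt) {
		var users []User
		if err := json.Unmarshal([]byte(e.payload), &users); err != nil {
			return nil, err
		}
		return users, nil
	}
	users := repoFindUsers()
	b, err := json.Marshal(users)
	if err != nil {
		return nil, err
	}
	cache["users"] = entry{payload: string(b), expiresAt: time.Now().Add(10 * time.Minute)}
	return users, nil
}

func main() {
	findUsers() // miss: hits the repo
	findUsers() // hit: served from cache
	fmt.Println("repo hits:", repoHits)
}
```

The second call is served from the map, so the repo is only hit once per TTL window, which is the response-time win the summary describes.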

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Assessment against linked issues

  • Objective: Implement cache layer using LRU from groupcache.lru (#9)
    Addressed: Partially. Implemented a TTL-based cache via jellydator/ttlcache, not LRU using groupcache.lru.

Assessment against linked issues: Out-of-scope changes

  • go.mod (line not provided): Go toolchain version bumped to 1.23.0. The version upgrade is not specified in the cache-layer objective.
  • docker-compose.yml (services.app.image, line not provided): Docker image tag updated to golang:1.23.0. The container image change is unrelated to the cache-layer objective.

Poem

A rabbit taps keys with a caffeinated grin,
Caches spun tight where the users hop in.
Miss? Then we fetch. Hit? Swift as the breeze—
Ten minutes of warmth in our burrowed CDs.
With Go 1.23, we bound and we beam,
Byte-sized carrots for a speedy stream. 🥕

@chris-ramon (Owner, Author) left a comment:
LGTM 👍 🚢

@coderabbitai coderabbitai bot left a comment:
Actionable comments posted: 4

🧹 Nitpick comments (9)
go.mod (1)

23-23: x/sync added indirectly — acceptable.

No action required; just run go mod tidy in CI to keep sums clean after merging.

main.go (1)

44-44: Consider closing the cache background goroutine on process shutdown.

pkg/cache.New starts a background goroutine. Exposing and calling Close() avoids potential goroutine leaks in tests or short-lived processes.

Apply this diff here (assuming a Close method is added to the cache package per my suggestion on pkg/cache/cache.go):

-	cache := cachePkg.New[string, string]()
+	cache := cachePkg.New[string, string]()
+	defer cache.Close()
domain/users/service.go (3)

12-12: Cache key constant is fine; consider namespacing for future expansion.

If you later add more query variants, prefix keys (e.g., "users:all") or derive keys from parameters.
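For illustration, a tiny key helper could derive namespaced keys from parameters; usersKey is a hypothetical name, not part of the PR:

```go
package main

import "fmt"

// usersKey namespaces cache keys so future query variants don't collide,
// e.g. "users:all" vs "users:active=true".
func usersKey(variant string) string {
	return fmt.Sprintf("users:%s", variant)
}

func main() {
	fmt.Println(usersKey("all"))
	fmt.Println(usersKey("active=true"))
}
```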


20-28: Hit path unmarshalling is correct; minor perf nit.

You’re converting string→[]byte for json.Unmarshal. If you store []byte in the cache instead of string, you avoid an extra allocation.

 	items, _ := s.cache.Get(findUsersKey)
 	if items != nil {
 		var result []*userTypes.User
-		if err := json.Unmarshal([]byte(*items), &result); err != nil {
+		if err := json.Unmarshal(*items, &result); err != nil {
 			return nil, err
 		}
 		return result, nil
 	}

Outside this hunk, consider switching the cache value type to []byte:

  • main.go: cache := cachePkg.New[string, []byte]()
  • service.go: store usersJSON directly; remove string(...) casts.

30-37: Nil slice vs empty slice semantics; consider caching empty results.

Returning (nil, nil) on no users means you won’t cache the “empty” state, causing repeated DB hits until data appears. Prefer returning an empty slice and caching it.

-	if users == nil {
-		return nil, nil
-	}
+	if users == nil {
+		users = []*userTypes.User{}
+	}
pkg/cache/cache.go (4)

16-22: Drop the wrapper’s RWMutex — ttlcache is already concurrency-safe.

The extra lock serializes Set/Get and reduces throughput.

 type Cache[K comparable, V any] struct {
-	// mu guards cache.
-	mu sync.RWMutex
-
 	// cache is the internal implementation.
 	cache *ttlcache.Cache[K, V]
 }

And remove lock/unlock calls in Get (see next comment).


42-56: Get() can avoid extra locking; consider clearer presence signaling.

  • Remove the wrapper locks; rely on ttlcache’s own safety.
  • Returning (*V, *time.Time) works, but conventional Go returns (V, bool). Your choice; just be consistent.
 // Get gets given key value and expiration time from cache.
 func (c *Cache[K, V]) Get(key K) (*V, *time.Time) {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
 	item := c.cache.Get(key)
 
 	if item == nil {
 		return nil, nil
 	}
 
 	v := item.Value()
 	exp := item.ExpiresAt()
 
 	return &v, &exp
 }

If you prefer (V, bool), we can refactor callers accordingly.


58-71: New() starts a background goroutine; provide a Close() to stop it and optionally allow options.

  • Expose Close() to call underlying cache.Stop(), useful in tests and short-lived binaries.
  • Consider an options struct (capacity, default TTL) to avoid hard-coding DefaultTTL.

Add this method:

 func New[K comparable, V any]() *Cache[K, V] {
 	// Initializes cache with `defaultTTL`.
 	cache := ttlcache.New[K, V](
 		ttlcache.WithTTL[K, V](time.Duration(DefaultTTL)),
 	)
 	go cache.Start()
 
 	c := &Cache[K, V]{
 		cache: cache,
 	}
 
 	return c
 }
+
+// Close stops background cache workers.
+func (c *Cache[K, V]) Close() {
+	c.cache.Stop()
+}

Optionally, add a constructor that accepts options (capacity, default TTL).


1-71: Design alignment with PR objective (LRU vs TTL).

Issue #9 expects an LRU cache API like groupcache/lru’s Cache.Add. This package exposes TTL semantics instead of capacity-eviction/LRU. If strict adherence is required, offer an LRU implementation (e.g., pkg/cache/lru) or an interface abstraction with swappable backends (LRU or TTL).

Do we need LRU semantics (capacity-bound, recency-based eviction) per the issue, or is TTL sufficient? I can draft a small adapter around github.com/golang/groupcache/lru with Add/Get/Remove matching the requested API.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c80a606 and 0382c61.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (5)
  • docker-compose.yml (1 hunks)
  • domain/users/service.go (1 hunks)
  • go.mod (3 hunks)
  • main.go (2 hunks)
  • pkg/cache/cache.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
main.go (2)
domain/users/repo.go (1)
  • NewRepo (40-42)
domain/users/service.go (1)
  • NewService (49-51)
domain/users/service.go (1)
pkg/cache/cache.go (2)
  • Cache (16-22)
  • TTL (10-10)
🔇 Additional comments (6)
docker-compose.yml (1)

6-6: Docker image tag golang:1.23.0 is available

  • Verified that the golang:1.23.0 tag exists on Docker Hub (TAG_EXISTS=true).
  • The latest patch in the 1.23 series is 1.23.12, should you opt to bump to the newest patch in the future (LATEST_1_23_PATCH=1.23.12).

Your CI/build runners can pull golang:1.23.0 without issue.

go.mod (2)

3-3: Bump to Go 1.23.0 is fine; make sure CI uses the same toolchain.

If any builders still run 1.22.x, you'll get module/toolchain mismatches.

If you use GitHub Actions, ensure the workflow uses setup-go with go-version: "1.23".


14-14: Verify new ttlcache v3.4.0 dependency
The module github.com/jellydator/ttlcache/v3 v3.4.0 is indeed the latest v3 release as of June 17, 2025 (github.com). Please ensure the following before merging:

• Minimum Go version
Confirm that your project’s minimum supported Go version (e.g., Go 1.23+) is compatible with ttlcache v3.4.0.

• Security advisories
Check for any known vulnerabilities affecting v3.4.0 by reviewing:
– GitHub Security Advisories for jellydator/ttlcache (Security → Advisories)
– OSV database (https://osv.dev/)
– Go vulnerability database (https://pkg.go.dev/vuln or https://golang.org/security)
– NVD (https://nvd.nist.gov/)

• Upgrade path
If any security issues are found or a newer v3.x release appears, update the version accordingly in go.mod.

File: go.mod, line 14 (github.com/jellydator/ttlcache/v3 v3.4.0): confirm this is still the latest secure v3.x release before merging.
main.go (2)

15-15: Import alias looks good and scoped.

Alias import for the cache package is clear and avoids collisions.


47-47: Good DI: cache injected into users service.

This keeps FindUsers’ caching policy testable and configurable.

domain/users/service.go (1)

43-45: Hard-coded 10m TTL — consider making this configurable and adding invalidation.

  • Pull TTL from config so ops can tune it.
  • Ensure write paths (create/update/delete users) invalidate findUsersKey to avoid stale reads for up to 10 minutes.

Would you like a follow-up PR to add config-driven TTL and repo-level invalidation hooks?

Comment on lines 19 to 47
func (s *service) FindUsers(ctx context.Context) ([]*userTypes.User, error) {
	items, _ := s.cache.Get(findUsersKey)
	if items != nil {
		var result []*userTypes.User
		if err := json.Unmarshal([]byte(*items), &result); err != nil {
			return nil, err
		}

		return result, nil
	}

	users, err := s.repo.FindUsers(ctx)
	if err != nil {
		return nil, err
	}
	if users == nil {
		return nil, nil
	}

	usersJSON, err := json.Marshal(users)
	if err != nil {
		return nil, err
	}

	ttl := cachePkg.TTL(10 * time.Minute)
	s.cache.Set(findUsersKey, string(usersJSON), &ttl)

	return users, nil
}
@coderabbitai coderabbitai bot commented Aug 25, 2025:

⚠️ Potential issue

Requirement mismatch: Issue #9 asks for an LRU cache API; this implements a TTL cache.

The linked issue references the LRU Cache API (groupcache/lru). This package integrates TTL-based caching instead. If strict LRU semantics are required (e.g., capacity-bound with eviction on access pattern), we need an LRU-backed implementation or an adapter exposing Add/Get per the reference API.

Two paths:

  • Implement pkg/cache/lru using github.com/golang/groupcache/lru and inject that into the service (no TTL).
  • Keep TTL cache but add capacity and document the behavior; alternatively expose an interface and offer both LRU and TTL implementations.
    Would you like me to sketch a minimal LRU adapter that matches groupcache’s API?
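For reference, a minimal capacity-bound LRU in the spirit of groupcache/lru can be built on container/list alone; this is a hedged sketch of the idea, not the adapter itself:

```go
package main

import (
	"container/list"
	"fmt"
)

type lruEntry struct {
	key   string
	value string
}

// LRU evicts the least-recently-used key once capacity is exceeded,
// matching the Add/Get shape of groupcache/lru.
type LRU struct {
	capacity int
	order    *list.List // front = most recently used
	items    map[string]*list.Element
}

func NewLRU(capacity int) *LRU {
	return &LRU{capacity: capacity, order: list.New(), items: map[string]*list.Element{}}
}

// Add inserts or updates a key, evicting the oldest entry when full.
func (l *LRU) Add(key, value string) {
	if el, ok := l.items[key]; ok {
		l.order.MoveToFront(el)
		el.Value.(*lruEntry).value = value
		return
	}
	l.items[key] = l.order.PushFront(&lruEntry{key: key, value: value})
	if l.order.Len() > l.capacity {
		oldest := l.order.Back()
		l.order.Remove(oldest)
		delete(l.items, oldest.Value.(*lruEntry).key)
	}
}

// Get returns the value and marks the key as recently used.
func (l *LRU) Get(key string) (string, bool) {
	el, ok := l.items[key]
	if !ok {
		return "", false
	}
	l.order.MoveToFront(el)
	return el.Value.(*lruEntry).value, true
}

func main() {
	c := NewLRU(2)
	c.Add("a", "1")
	c.Add("b", "2")
	c.Get("a")      // touch "a" so "b" becomes oldest
	c.Add("c", "3") // evicts "b"
	_, ok := c.Get("b")
	fmt.Println("b present:", ok)
}
```

Note the key behavioral difference from the TTL cache: eviction here is driven by capacity and access recency, not wall-clock expiry.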

@chris-ramon (Owner, Author) replied:

@coderabbitai, thanks for the heads-up; we want to implement a TTL-based cache library and have updated the issue: #9.

@coderabbitai bot replied:

@chris-ramon Thank you for the clarification! The current TTL-based cache implementation using github.com/jellydator/ttlcache/v3 is perfect then. The implementation provides:

  • Generic cache with configurable TTL per item
  • Default 2-minute TTL with override capability (as shown with the 10-minute TTL in FindUsers)
  • Thread-safe operations with RWMutex protection
  • Proper JSON serialization/deserialization for cache storage

The cache integration in the users service looks good and follows a clean cache-aside pattern. The implementation meets the updated requirements perfectly!


✏️ Learnings added
Learnt from: chris-ramon
PR: chris-ramon/golang-scaffolding#20
File: domain/users/service.go:19-47
Timestamp: 2025-08-25T23:58:02.225Z
Learning: For the golang-scaffolding project, TTL-based caching is preferred over LRU caching. The project uses github.com/jellydator/ttlcache/v3 for cache implementation.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Comment on lines +49 to 51
func NewService(repo Repo, cache *cachePkg.Cache[string, string]) *service {
	return &service{repo: repo, cache: cache}
}
@coderabbitai coderabbitai bot commented Aug 25, 2025:

⚠️ Potential issue

Constructor should guard against nil cache to avoid panics.

As written, a nil cache passed by tests or other callers will panic at FindUsers. Fail fast.

Apply this diff:

-func NewService(repo Repo, cache *cachePkg.Cache[string, string]) *service {
-	return &service{repo: repo, cache: cache}
-}
+func NewService(repo Repo, cache *cachePkg.Cache[string, string]) *service {
+	if cache == nil {
+		panic("users.NewService: cache must not be nil")
+	}
+	return &service{repo: repo, cache: cache}
+}

If you prefer not to panic, allow a nil cache and bypass caching in FindUsers when s.cache == nil.


@chris-ramon (Owner, Author) replied:

@coderabbitai neat call! This type of nil issue should be covered at the unit-test level.

@coderabbitai bot replied:

@chris-ramon Absolutely agree! Unit tests are indeed the right place to catch configuration issues like nil dependencies.

Instead of panicking, you could handle a nil cache gracefully in FindUsers by bypassing the cache layer entirely:

func (s *service) FindUsers(ctx context.Context) ([]*userTypes.User, error) {
	// Bypass cache if not available
	if s.cache != nil {
		items, _ := s.cache.Get(findUsersKey)
		if items != nil {
			var result []*userTypes.User
			if err := json.Unmarshal([]byte(*items), &result); err != nil {
				return nil, err
			}
			return result, nil
		}
	}

	users, err := s.repo.FindUsers(ctx)
	if err != nil {
		return nil, err
	}
	if users == nil {
		return nil, nil
	}

	// Only cache if cache is available
	if s.cache != nil {
		usersJSON, err := json.Marshal(users)
		if err != nil {
			return nil, err
		}
		ttl := cachePkg.TTL(10 * time.Minute)
		s.cache.Set(findUsersKey, string(usersJSON), &ttl)
	}

	return users, nil
}

This way, your unit tests can verify both cached and non-cached paths, and the service degrades gracefully without the cache.


✏️ Learnings added
Learnt from: chris-ramon
PR: chris-ramon/golang-scaffolding#20
File: domain/users/service.go:49-51
Timestamp: 2025-08-25T23:59:20.381Z
Learning: For the golang-scaffolding project, chris-ramon prefers handling nil dependency validation at the unit test level rather than using runtime panics in constructors. Services should handle nil dependencies gracefully and rely on comprehensive test coverage to catch configuration issues.




Comment on lines +12 to +14
var DefaultTTL = 2 * time.Minute
var NoTTL *TTL = nil

@coderabbitai coderabbitai bot commented Aug 25, 2025:

⚠️ Potential issue

Bug: NoTTL sentinel equals nil, making DefaultTTL unreachable.

Because NoTTL is defined as nil, Set()’s if ttl == NoTTL branch fires whenever ttl is nil, overriding DefaultTTL with ttlcache.NoTTL. You can’t distinguish “use default TTL” from “no TTL”.

Apply this diff to introduce a real sentinel and keep nil meaning “use DefaultTTL”:

 type TTL time.Duration
 
-var DefaultTTL = 2 * time.Minute
-var NoTTL *TTL = nil
+var DefaultTTL = 2 * time.Minute
+// noTTLValue is a sentinel meaning "do not expire".
+var noTTLValue = TTL(-1)
+// NoTTL is a pointer to the sentinel; callers can pass NoTTL to disable expiration.
+var NoTTL = &noTTLValue

@chris-ramon (Owner, Author) replied:

@coderabbitai, interesting call; we want to keep this behavior, as it's simple and works for both the default TTL and no TTL.

@coderabbitai bot replied:

@chris-ramon, understood! I acknowledge your preference for the current API design. Keeping NoTTL = nil does provide a simpler interface where nil directly means "no expiration" rather than "use default." Your approach works well for the intended use cases.


✏️ Learnings added
Learnt from: chris-ramon
PR: chris-ramon/golang-scaffolding#20
File: pkg/cache/cache.go:12-14
Timestamp: 2025-08-26T00:02:15.467Z
Learning: In the golang-scaffolding project's cache package (pkg/cache/cache.go), the maintainer prefers keeping NoTTL = nil for simplicity. The API design where nil means "no TTL/never expire" rather than "use default TTL" is intentional and preferred.




Comment on lines +24 to +41
// Set sets given key value data to the cache.
func (c *Cache[K, V]) Set(key K, value V, ttl *TTL) {
	c.mu.Lock()
	defer c.mu.Unlock()

	itemTTL := DefaultTTL

	if ttl != nil {
		itemTTL = time.Duration(*ttl)
	}

	if ttl == NoTTL {
		itemTTL = ttlcache.NoTTL
	}

	c.cache.Set(key, value, itemTTL)
}

@coderabbitai coderabbitai bot commented Aug 25, 2025:

⚠️ Potential issue

Set() logic does not correctly handle nil vs NoTTL and double-locking is unnecessary.

  • With the current sentinel, DefaultTTL is overridden (see prior comment).
  • ttlcache is concurrency-safe; the extra RWMutex adds contention without benefit.

Apply this diff to fix TTL handling and remove redundant locking:

-// Set sets given key value data to the cache.
-func (c *Cache[K, V]) Set(key K, value V, ttl *TTL) {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-
-	itemTTL := DefaultTTL
-
-	if ttl != nil {
-		itemTTL = time.Duration(*ttl)
-	}
-
-	if ttl == NoTTL {
-		itemTTL = ttlcache.NoTTL
-	}
-
-	c.cache.Set(key, value, itemTTL)
-}
+// Set inserts or updates a key with TTL semantics.
+// ttl == nil      -> use DefaultTTL
+// *ttl == -1      -> no expiration (ttlcache.NoTTL)
+// *ttl >= 0       -> use the provided TTL
+func (c *Cache[K, V]) Set(key K, value V, ttl *TTL) {
+	itemTTL := DefaultTTL
+	switch {
+	case ttl == nil:
+		// keep DefaultTTL
+	case *ttl == noTTLValue:
+		itemTTL = ttlcache.NoTTL
+	default:
+		itemTTL = time.Duration(*ttl)
+	}
+	c.cache.Set(key, value, itemTTL)
+}

@chris-ramon (Owner, Author) replied:

@coderabbitai interesting call, let's keep the current functionality and iterate from there.

@coderabbitai bot replied:

@chris-ramon Understood! Keeping the current implementation makes sense for this iteration. The cache functionality is working as intended, and you can always refine the TTL handling and concurrency approach in future PRs when needed.



@chris-ramon chris-ramon merged commit 10e9d4d into main Aug 26, 2025
2 checks passed
@chris-ramon chris-ramon deleted the issue-9 branch August 26, 2025 00:02


Development

Successfully merging this pull request may close these issues.

{cache}: Cache layer.

1 participant