Conversation
- Added functions to generate unique cache keys and prefixes, supporting both single and composite unique keys.
- Implemented caching mechanisms for unique keys in the Gorm2Cache, including batch operations for setting and invalidating unique cache.
- Updated query handling to utilize unique cache where applicable, improving cache hit rates.
- Enhanced tests for unique key generation and caching logic to ensure correctness and reliability.
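As a concrete illustration of the key layout described above, here is a minimal sketch patterned on the `GenUniqueCacheKey` signature that appears in the review diff further down; the prefix constant and the plain `:` join are assumptions, and the PR's actual encoding may differ (a later commit switches to base64-encoded parts).

```go
package main

import (
	"fmt"
	"strings"
)

// Sketch only: mirrors the GenUniqueCacheKey shape visible in the review diff below.
// The real implementation may encode the parts instead of joining them raw.
const gormCachePrefix = "gormcache"

func genUniqueCacheKey(instanceId, tableName, uniqueIndexName string, uniqueKeyValues ...string) string {
	key := strings.Join(uniqueKeyValues, ":")
	return fmt.Sprintf("%s:%s:u:%s:%s:%s", gormCachePrefix, instanceId, tableName, uniqueIndexName, key)
}

func main() {
	// Single unique key (idx_email).
	fmt.Println(genUniqueCacheKey("test123", "users", "idx_email", "alice@example.com"))
	// Composite unique key (user_id + role_id).
	fmt.Println(genUniqueCacheKey("test123", "user_roles", "idx_user_role", "1", "2"))
}
```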
…cading cancel
Singleflight was using the leader request's context for the shared DB/cache
work. When that request's context was cancelled or timed out (e.g. client
disconnect), the same error was returned to all waiters for the same key
("one cancel, everyone errors").
- Have the leader run the shared query with a dedicated background context
(context.WithTimeout(context.Background(), 30s)) so cancellation of the
first request does not affect other waiters.
- Call the context's cancel in fillCallAfterQuery to avoid timer leaks.
- Add TestSingleFlight_ContextCancel_DoesNotAffectWaiters: uses
singleFlightLeaderDelayForTest to reliably reproduce the bug (waiter gets
context.DeadlineExceeded without fix) and to verify the fix.
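A minimal sketch of the pattern this fix applies, using `golang.org/x/sync/singleflight`; the key name, the 30s timeout, and `queryUser` are illustrative stand-ins rather than the plugin's actual code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// loadUser runs the shared work on a detached context with its own timeout, so
// cancelling the first caller's ctx does not fail every waiter on the same key.
func loadUser(ctx context.Context, id string) (string, error) {
	v, err, _ := group.Do("user:"+id, func() (interface{}, error) {
		// Deliberately not derived from the caller's ctx.
		leaderCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel() // release the timer once the shared work finishes
		return queryUser(leaderCtx, id)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

// queryUser stands in for the real DB/cache lookup.
func queryUser(ctx context.Context, id string) (string, error) {
	select {
	case <-time.After(50 * time.Millisecond):
		return "user-" + id, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	// Even with an already-cancelled caller context, the shared work still succeeds
	// because it runs on the detached leader context.
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	fmt.Println(loadUser(ctx, "42"))
}
```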
📝 Walkthrough

Adds unique-index cache support (generation, lookup, write, invalidation), extends key helpers for composite/map keys, integrates unique-cache handling into query and mutation hooks, and adds extensive unit and Docker-based integration tests plus Go/GORM dependency updates.
Sequence Diagram(s)

sequenceDiagram
participant Client as Client
participant Before as BeforeQuery
participant UCache as UniqueCache
participant PCache as PrimaryCache
participant DB as Database
participant After as AfterQuery
Client->>Before: Execute query (with WHERE)
Before->>Before: Extract unique keys from WHERE
alt unique keys found
Before->>UCache: BatchGetUniqueCache(uniqueKeys)
UCache-->>Before: cached payload(s) or miss
alt unique cache hit
Before-->>Client: Return cached rows
else
Before->>PCache: Try primary cache (leader bg context)
end
else
Before->>PCache: Try primary cache (leader bg context)
end
alt cache miss
Before->>DB: Execute query
DB-->>After: Return rows
After->>After: Extract primary & unique keys from results
After->>UCache: BatchSetUniqueCache(uniqueKeys, values)
After->>PCache: Update primary cache
After-->>Client: Return rows
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cache/query.go (1)
141-143: ⚠️ Potential issue | 🟡 Minor
Use the leader context consistently for cache reads.
These cache calls still use the original `ctx` captured before Line 122, so a canceled leader request context can still force cache-read failures.

🔧 Proposed fix

- cacheValues, err := cache.BatchGetPrimaryCache(ctx, tableName, primaryKeys)
+ cacheValues, err := cache.BatchGetPrimaryCache(db.Statement.Context, tableName, primaryKeys)
...
- cacheValues, err := cache.BatchGetUniqueCache(ctx, tableName, uniqueIndexName, uniqueKeys)
+ cacheValues, err := cache.BatchGetUniqueCache(db.Statement.Context, tableName, uniqueIndexName, uniqueKeys)
...
- cacheValue, err := cache.GetSearchCache(ctx, tableName, sql, db.Statement.Vars...)
+ cacheValue, err := cache.GetSearchCache(db.Statement.Context, tableName, sql, db.Statement.Vars...)

Also applies to: 197-200, 232-236
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/query.go` around lines 141 - 143, The cache reads are using the original ctx instead of the leader request context, which can cause failures when the original ctx is canceled; update all cache read calls (e.g., cache.BatchGetPrimaryCache and their error logging via cache.Logger.CtxError) to use the leaderCtx variable instead of ctx (same change also required for the other occurrences noted around the other cache read sites), and ensure the CtxError calls pass leaderCtx so logs and reads consistently use the leader request context.
🧹 Nitpick comments (6)
util/key_test.go (1)
83-84: Tighten map-based key tests to assert exact deterministic output.

Right now multi-field cases only check `Contains`, so ordering bugs can still pass. These tests should compare the full expected key string.

🧪 Proposed test tightening

- expected: "gormcache:test123:p:users:1:2", // sorted by field name: role_id, user_id
+ expected: "gormcache:test123:p:users:2:1", // sorted by field name: role_id, user_id
...
- // 对于单个key,验证完整匹配
- if len(tt.primaryKeyMap) == 1 {
-     if key != tt.expected {
-         t.Errorf("expected %s, got %s", tt.expected, key)
-     }
- } else {
-     // 对于多个key,验证包含所有值
-     for _, val := range tt.primaryKeyMap {
-         if !strings.Contains(key, val) {
-             t.Errorf("key should contain value %s, got %s", val, key)
-         }
-     }
- }
+ if key != tt.expected {
+     t.Errorf("expected %s, got %s", tt.expected, key)
+ }
...
- // 对于单个key,验证完整匹配
- if len(tt.uniqueKeyMap) == 1 {
-     if key != tt.expected {
-         t.Errorf("expected %s, got %s", tt.expected, key)
-     }
- } else {
-     // 对于多个key,验证包含所有值
-     for _, val := range tt.uniqueKeyMap {
-         if !strings.Contains(key, val) {
-             t.Errorf("key should contain value %s, got %s", val, key)
-         }
-     }
- }
+ if key != tt.expected {
+     t.Errorf("expected %s, got %s", tt.expected, key)
+ }

Also applies to: 96-120, 389-390, 402-426
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@util/key_test.go` around lines 83 - 84, The map-based key tests in util/key_test.go use Contains for multi-field cases which allows nondeterministic ordering to pass; update those assertions to assert exact equality of the full generated key string (e.g., replace Contains checks with equality assertions comparing to "gormcache:test123:p:users:1:2") for the cases around the shown diff and also in the other affected ranges (96-120, 389-390, 402-426), ensuring the test functions that build keys (the map-based key test functions in this file) compare the complete expected key string rather than using Contains.

cache/helpers_test.go (1)
378-414: TestGetAllUniqueIndexes currently validates scaffolding, not behavior.

The "with unique indexes" case still expects `0`, so regressions in unique-index extraction won't be caught.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/helpers_test.go` around lines 378 - 414, Update TestGetAllUniqueIndexes to assert real behavior by constructing a schema that contains at least one unique index (instead of relying on nil/placeholder and expecting 0). Create a schema.Schema instance with its Indexes populated (either by calling ParseIndexes with a minimal valid schema input or by directly setting the Schema.Indexes to include a unique index entry), then call getAllUniqueIndexes and assert the returned slice length matches the expectedCount and that expectedNames (the unique index names) are present; reference TestGetAllUniqueIndexes, getAllUniqueIndexes, schema.Schema and ParseIndexes to locate where to change the test.

cache/dockertest_integration_test.go (2)
126-136: Table cleanup outside sync.Once may cause race conditions.

The `DROP TABLE` and `AutoMigrate` calls (lines 127-133) run on every `setupMySQL` call, while the Docker container setup runs only once. If multiple tests call `setupMySQL` concurrently (e.g., via `t.Parallel()`), there could be race conditions on table operations.

Currently tests are not parallel, but this is worth noting if parallelism is added later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/dockertest_integration_test.go` around lines 126 - 136, The table DROP and AutoMigrate steps in setupMySQL (the calls to mysqlDB.Exec("DROP TABLE IF EXISTS ...") and mysqlDB.AutoMigrate(...)) run on every invocation and can race when tests run concurrently; move these schema setup/cleanup steps into the same sync.Once block that initializes the Docker container (or protect them with a package-level mutex) so schema changes run only once during test setup; update the setupMySQL function to perform only per-call DB handle retrieval while the sync.Once-initialized block handles DROP TABLE and AutoMigrate (referencing setupMySQL, the DROP TABLE Exec call, and AutoMigrate call to locate the changes).
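A sketch of the shape both this comment and the setup-error prompt above point toward: run the one-time work (container start, table drop, AutoMigrate) inside `sync.Once`, capture the error in a package variable instead of calling `t.Fatalf` inside the closure, and have every test check it. The names (`setupOnce`, `initResource`, `Resource`) are illustrative, not the test file's actual identifiers.

```go
package dockersetup

import (
	"sync"
	"testing"
)

var (
	setupOnce sync.Once
	setupErr  error
	shared    *Resource // stand-in for the shared *gorm.DB / container handles
)

// Resource stands in for whatever the one-time bootstrap produces.
type Resource struct{ dsn string }

// initResource is a placeholder for the once-only work: start the container,
// drop and re-create the tables, run AutoMigrate.
func initResource() (*Resource, error) {
	return &Resource{dsn: "mysql://..."}, nil
}

// setup captures the initialization error instead of failing inside sync.Once,
// so every test sees the failure and none runs against a nil handle. Because the
// schema reset lives inside the Once block, concurrent tests cannot race on it.
func setup(t *testing.T) *Resource {
	t.Helper()
	setupOnce.Do(func() {
		shared, setupErr = initResource()
	})
	if setupErr != nil {
		t.Fatalf("setup failed: %v", setupErr)
	}
	return shared
}
```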
628-629: Using time.Sleep for cache consistency may cause flaky tests.

Multiple places use `time.Sleep(50-200ms)` to wait for cache invalidation. This approach can be flaky in CI environments with varying load.

Consider implementing a polling mechanism or using synchronization primitives if the cache provides hooks for completion callbacks.
Also applies to: 666-667, 693-694, 716-717, 755-756, 797-798, 828-829, 840-841, 865-866
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/dockertest_integration_test.go` around lines 628 - 629, Replace fragile time.Sleep(...) uses in dockertest_integration_test.go with a deterministic polling/synchronization helper: remove direct calls to time.Sleep(50-200*time.Millisecond) and instead implement and call a helper like waitForCondition or waitForCacheConsistency that accepts a short interval, overall timeout, and a predicate function that checks the cache state (e.g., presence/absence of key or expected value). Update every occurrence (the lines with time.Sleep and tests that rely on cache invalidation) to call this helper and fail the test if the predicate does not become true within the timeout; if the cache library exposes hooks/callbacks, call those instead of polling. Ensure the helper is reused across all listed locations to avoid flaky CI timing assumptions.
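One possible shape for such a helper, matching the `waitForCondition(interval, timeout, predicate)` signature mentioned in a later commit; the body is a guess, not the PR's implementation.

```go
package dockersetup

import (
	"testing"
	"time"
)

// waitForCondition polls predicate until it returns true or timeout elapses,
// failing the test otherwise. It replaces fixed time.Sleep calls in the tests.
func waitForCondition(t *testing.T, interval, timeout time.Duration, predicate func() bool) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if predicate() {
			return
		}
		time.Sleep(interval)
	}
	t.Fatalf("condition not met within %v", timeout)
}
```

A test would then call something like `waitForCondition(t, 10*time.Millisecond, 2*time.Second, func() bool { return cacheEntryGone(key) })`, where the predicate (here the hypothetical `cacheEntryGone`) checks the specific cache state the test cares about.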
cache/helpers.go (1)

201-237: ParseIndexes() called multiple times may impact performance.

`getUniqueIndexFields` (line 206) and `getAllUniqueIndexes` (line 226) both call `s.ParseIndexes()`. In hot paths where these are called frequently, this could be a performance concern as `ParseIndexes()` may involve reflection.

Consider caching the parsed indexes or passing the result between functions:

-func getUniqueIndexFields(s *schema.Schema, indexName string) []*schema.Field {
+func getUniqueIndexFields(s *schema.Schema, indexName string, cachedIndexes map[string]*schema.Index) []*schema.Field {
     if s == nil {
         return nil
     }
-    allIndexes := s.ParseIndexes()
+    allIndexes := cachedIndexes
+    if allIndexes == nil {
+        allIndexes = getAllUniqueIndexes(s)
+    }
Verify each finding against the current code and only fix it if needed. In `@cache/helpers.go` around lines 201 - 237, Both getUniqueIndexFields and getAllUniqueIndexes call s.ParseIndexes() causing duplicate work; change the design to parse once and reuse the result by either (A) adding an overload/helper like parseIndexesOnce(s *schema.Schema) that caches the parsed result on the schema or in a package-level cache, or (B) change signatures to accept the parsed indexes (e.g., pass the slice/map returned by s.ParseIndexes() into getUniqueIndexFields and getAllUniqueIndexes) so callers compute s.ParseIndexes() once and pass it through; update callers to use the new helper/signature and ensure thread-safety if you introduce caching.

cache/cache.go (1)
192-199: Mutating the input slice - potential side effect.

`BatchSetUniqueCache` modifies the `kvs` slice in place (line 196), which mutates the caller's slice. This mirrors `BatchSetPrimaryKeyCache` (line 154), so it's consistent, but callers should be aware of this behavior.

Consider documenting or avoiding mutation
If mutation is intentional for performance, document it. Otherwise, create a copy:
 func (c *Gorm2Cache) BatchSetUniqueCache(ctx context.Context, tableName string, uniqueIndexName string, kvs []util.Kv) error {
-    for idx, kv := range kvs {
+    newKvs := make([]util.Kv, len(kvs))
+    for idx, kv := range kvs {
         // kv.Key 已经是最终格式(单个值或已用":"连接的联合unique键),直接传入
-        kvs[idx].Key = util.GenUniqueCacheKey(c.InstanceId, tableName, uniqueIndexName, kv.Key)
+        newKvs[idx] = util.Kv{
+            Key:   util.GenUniqueCacheKey(c.InstanceId, tableName, uniqueIndexName, kv.Key),
+            Value: kv.Value,
+        }
     }
-    return c.cache.BatchSetKeys(ctx, kvs)
+    return c.cache.BatchSetKeys(ctx, newKvs)
 }
Verify each finding against the current code and only fix it if needed. In `@cache/cache.go` around lines 192 - 199, BatchSetUniqueCache currently mutates the caller's kvs slice by rewriting kvs[idx].Key using util.GenUniqueCacheKey; to fix, either clearly document this in the function comment (on Gorm2Cache.BatchSetUniqueCache) that the input slice is mutated, or avoid the side-effect by creating a new slice (e.g., make([]util.Kv, len(kvs)) and copy each element while setting newKey via util.GenUniqueCacheKey) and pass that new slice to c.cache.BatchSetKeys(ctx, newSlice); ensure references to util.GenUniqueCacheKey and c.cache.BatchSetKeys remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cache/afterDelete.go`:
- Around line 73-85: When db.Statement.Schema is nil the code currently skips
invalidating unique-key fallbacks; modify the branch in AfterDelete so that when
db.Statement.Schema == nil you still retrieve the list of unique index names
(e.g., via a new or existing helper like cache.GetUniqueIndexNames(tableName) or
cache.ListUniqueIndexes(tableName)) and loop over those index names calling
cache.InvalidateAllUniqueCache(ctx, tableName, indexName) with the same
CtxInfo/CtxError logging as the schema-path; keep using
getAllUniqueIndexes(db.Statement.Schema) when Schema is present, but fall back
to the cache-based index name lookup when Schema is nil so unique caches are
invalidated even for schema-less deletes.
In `@cache/afterUpdate.go`:
- Around line 75-87: The AfterUpdate hook currently skips unique-cache
invalidation when db.Statement.Schema is nil; mirror the delete-hook fix by
adding the same schema-nil fallback before calling getAllUniqueIndexes: if
db.Statement.Schema is nil, derive the schema from the statement/model using the
same helper logic used in the delete hook (the code path you added there), then
call getAllUniqueIndexes(schema) and loop over indexName calling
cache.InvalidateAllUniqueCache(ctx, tableName, indexName) with the same logging
(cache.Logger.CtxInfo/CtxError) as present now so unique-cache fallback
invalidation runs even when db.Statement.Schema is nil.
In `@cache/dockertest_integration_test.go`:
- Around line 73-136: The setupMySQL function uses setupMySQLOnce.Do and
currently calls t.Fatalf inside the Do closure, which can mark the Once as done
while leaving mysqlDB nil; change this to capture any initialization error into
a package-local variable (e.g., setupMySQLError) instead of calling t.Fatalf
inside the closure, set mysqlDB/mysqlPool/mysqlResource as before, then after
setupMySQLOnce.Do returns check setupMySQLError and call t.Fatalf (or
t.Skip/require) from the test context if non-nil; update references to
setupMySQLOnce, setupMySQLError, setupMySQL, mysqlDB, mysqlPool, and
mysqlResource accordingly so initialization failures are reported per-test and
subsequent tests will retry initialization.
In `@cache/helpers.go`:
- Around line 314-354: The composite unique key loop currently uses min-length
aligning to build keys; change it to compute the Cartesian product of value
lists for each non-nil field in index.Fields: for each fieldOption.Field gather
its values := fieldValuesMap[fieldOption.Field.DBName] (skip fields with nil
Field or empty values), then generate all combinations (recursive or iterative
product) of those slices, join each combination with ":" into uniqueKeys,
deduplicate via uniqueStringSlice and assign result[indexName] =
uniqueStringSlice(uniqueKeys); update logic around variables index, indexName,
fieldOption, fieldValuesMap, uniqueKeys to reflect the product approach.
- Around line 100-126: The composite key logic in the block using
primaryKeyFields and fieldValuesMap incorrectly takes the minimum slice length
and never generates full combinations; replace that logic with a
Cartesian-product generation across all fieldValuesMap[field.DBName] slices
(e.g., implement a helper like generateCartesianProduct that takes [][]string
and returns [][]string combinations), then for each combination join parts with
":" to form keys (the same shape as current keyParts building) and return
uniqueStringSlice(keys); update the code paths that reference primaryKeyFields,
fieldValuesMap, and uniqueStringSlice to use the new product-based key
generation instead of the min-length + last-value fallback.
- Around line 559-579: The composite-key loop incorrectly handles nil ValueOf
and over-writes allZero inside the loop; update the logic in the block iterating
primaryKeyFields so that if valueOf == nil you mark the row as incomplete (e.g.,
set a local incomplete flag and break) to avoid building a partial key, remove
the redundant assignment allZero = true inside the loop, and only set allZero =
false when you successfully extract a non-zero primaryKey; after the loop skip
(continue) when incomplete or allZero remains true, and append
strings.Join(keyParts, ":") to primaryKeys only when you have a complete,
non-zero composite key (variables to change: primaryKeyFields, valueOf,
primaryKey, allZero, incomplete, keyParts, primaryKeys, elemValue).
In `@cache/query.go`:
- Line 27: The code hard-wires a 30s background context via the
singleflightContextTimeout constant which drops caller context values and
cancels/overrides caller deadlines; update the singleflight leader path to
derive a timeout from the caller's context instead of using
context.Background(): where you currently create a new context with
context.Background() (refer to singleflightContextTimeout and the leader path
code around the singleflight.Do/leader invocation at lines ~120-123), replace
that with ctx (the function's incoming context) and use context.WithTimeout(ctx,
singleflightContextTimeout) so caller deadlines/cancellation and context values
are preserved (and ensure you cancel the returned cancel func).
- Around line 422-423: The log call at cache.Logger.CtxInfo is currently
printing the full uniqueKvs payload (which may include serialized sensitive
objects); remove or redact the payload from logs and replace it with a safe
summary before calling BatchSetUniqueCache. Specifically, change the code around
cache.Logger.CtxInfo and the subsequent call to BatchSetUniqueCache so that you
log only non-sensitive metadata (e.g., tableName, indexName, count of uniqueKvs,
and a hash or truncated fingerprint) instead of the full uniqueKvs contents;
ensure the BatchSetUniqueCache call remains unchanged except for using the
redacted summary in the log.
In `@go.mod`:
- Line 69: The go.mod entry for golang.org/x/crypto should be upgraded from
v0.31.0 to a patched release to remove known vulnerabilities: update the module
requirement for golang.org/x/crypto to a non-vulnerable version (use the latest
patched release), then run go get golang.org/x/crypto@latest (or specify the
patched tag), run go mod tidy to update the lockfile, and run the test suite and
dependency scanner to verify no regressions; locate the golang.org/x/crypto line
in go.mod to make the change and ensure all references compile against the
updated package.
In `@util/key.go`:
- Around line 24-33: The current GenPrimaryCacheKey builds composite keys by
joining raw primaryKeyValues with ":" which can collide (e.g. ["a:b","c"] vs
["a","b:c"]); fix by encoding each primaryKeyValues element to a delimiter-safe
form (for example base64 URL-safe or percent-encoding) before joining, then
format the final key as before; apply the same change to the analogous index/key
functions shown later (e.g., GenIndexCacheKey or any function at the other
reported region) so all composite-key builders consistently encode components
prior to joining.
---
Outside diff comments:
In `@cache/query.go`:
- Around line 141-143: The cache reads are using the original ctx instead of the
leader request context, which can cause failures when the original ctx is
canceled; update all cache read calls (e.g., cache.BatchGetPrimaryCache and
their error logging via cache.Logger.CtxError) to use the leaderCtx variable
instead of ctx (same change also required for the other occurrences noted around
the other cache read sites), and ensure the CtxError calls pass leaderCtx so
logs and reads consistently use the leader request context.
---
Nitpick comments:
In `@cache/cache.go`:
- Around line 192-199: BatchSetUniqueCache currently mutates the caller's kvs
slice by rewriting kvs[idx].Key using util.GenUniqueCacheKey; to fix, either
clearly document this in the function comment (on
Gorm2Cache.BatchSetUniqueCache) that the input slice is mutated, or avoid the
side-effect by creating a new slice (e.g., make([]util.Kv, len(kvs)) and copy
each element while setting newKey via util.GenUniqueCacheKey) and pass that new
slice to c.cache.BatchSetKeys(ctx, newSlice); ensure references to
util.GenUniqueCacheKey and c.cache.BatchSetKeys remain unchanged.
In `@cache/dockertest_integration_test.go`:
- Around line 126-136: The table DROP and AutoMigrate steps in setupMySQL (the
calls to mysqlDB.Exec("DROP TABLE IF EXISTS ...") and mysqlDB.AutoMigrate(...))
run on every invocation and can race when tests run concurrently; move these
schema setup/cleanup steps into the same sync.Once block that initializes the
Docker container (or protect them with a package-level mutex) so schema changes
run only once during test setup; update the setupMySQL function to perform only
per-call DB handle retrieval while the sync.Once-initialized block handles DROP
TABLE and AutoMigrate (referencing setupMySQL, the DROP TABLE Exec call, and
AutoMigrate call to locate the changes).
- Around line 628-629: Replace fragile time.Sleep(...) uses in
dockertest_integration_test.go with a deterministic polling/synchronization
helper: remove direct calls to time.Sleep(50-200*time.Millisecond) and instead
implement and call a helper like waitForCondition or waitForCacheConsistency
that accepts a short interval, overall timeout, and a predicate function that
checks the cache state (e.g., presence/absence of key or expected value). Update
every occurrence (the lines with time.Sleep and tests that rely on cache
invalidation) to call this helper and fail the test if the predicate does not
become true within the timeout; if the cache library exposes hooks/callbacks,
call those instead of polling. Ensure the helper is reused across all listed
locations to avoid flaky CI timing assumptions.
In `@cache/helpers_test.go`:
- Around line 378-414: Update TestGetAllUniqueIndexes to assert real behavior by
constructing a schema that contains at least one unique index (instead of
relying on nil/placeholder and expecting 0). Create a schema.Schema instance
with its Indexes populated (either by calling ParseIndexes with a minimal valid
schema input or by directly setting the Schema.Indexes to include a unique index
entry), then call getAllUniqueIndexes and assert the returned slice length
matches the expectedCount and that expectedNames (the unique index names) are
present; reference TestGetAllUniqueIndexes, getAllUniqueIndexes, schema.Schema
and ParseIndexes to locate where to change the test.
In `@cache/helpers.go`:
- Around line 201-237: Both getUniqueIndexFields and getAllUniqueIndexes call
s.ParseIndexes() causing duplicate work; change the design to parse once and
reuse the result by either (A) adding an overload/helper like parseIndexesOnce(s
*schema.Schema) that caches the parsed result on the schema or in a
package-level cache, or (B) change signatures to accept the parsed indexes
(e.g., pass the slice/map returned by s.ParseIndexes() into getUniqueIndexFields
and getAllUniqueIndexes) so callers compute s.ParseIndexes() once and pass it
through; update callers to use the new helper/signature and ensure thread-safety
if you introduce caching.
In `@util/key_test.go`:
- Around line 83-84: The map-based key tests in util/key_test.go use Contains
for multi-field cases which allows nondeterministic ordering to pass; update
those assertions to assert exact equality of the full generated key string
(e.g., replace Contains checks with equality assertions comparing to
"gormcache:test123:p:users:1:2") for the cases around the shown diff and also in
the other affected ranges (96-120, 389-390, 402-426), ensuring the test
functions that build keys (the map-based key test functions in this file)
compare the complete expected key string rather than using Contains.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (12)
- cache/afterDelete.go
- cache/afterUpdate.go
- cache/cache.go
- cache/dockertest_integration_test.go
- cache/helpers.go
- cache/helpers_integration_test.go
- cache/helpers_test.go
- cache/query.go
- cache/singleflight.go
- go.mod
- util/key.go
- util/key_test.go
💤 Files with no reviewable changes (1)
- cache/singleflight.go
func GenPrimaryCacheKey(instanceId string, tableName string, primaryKeyValues ...string) string {
    var key string
    if len(primaryKeyValues) == 1 {
        // 单个参数,直接使用(可能是单个主键值,也可能是已经连接好的联合主键)
        key = primaryKeyValues[0]
    } else {
        // 多个参数,用":"连接
        key = strings.Join(primaryKeyValues, ":")
    }
    return fmt.Sprintf("%s:%s:p:%s:%s", GormCachePrefix, instanceId, tableName, key)
Avoid ambiguous ":" concatenation for composite key values.
This format can collide (example: ["a:b","c"] vs ["a","b:c"]), which risks wrong cache hits/invalidations.
🔧 Proposed fix
import (
+ "encoding/base64"
"fmt"
"math/rand"
"reflect"
"sort"
"strings"
)
+
+func joinEscapedKeyParts(parts ...string) string {
+ encoded := make([]string, len(parts))
+ for i, p := range parts {
+ encoded[i] = base64.RawURLEncoding.EncodeToString([]byte(p))
+ }
+ return strings.Join(encoded, ":")
+}
func GenPrimaryCacheKey(instanceId string, tableName string, primaryKeyValues ...string) string {
- var key string
- if len(primaryKeyValues) == 1 {
- key = primaryKeyValues[0]
- } else {
- key = strings.Join(primaryKeyValues, ":")
- }
+ key := joinEscapedKeyParts(primaryKeyValues...)
return fmt.Sprintf("%s:%s:p:%s:%s", GormCachePrefix, instanceId, tableName, key)
}
func GenUniqueCacheKey(instanceId string, tableName string, uniqueIndexName string, uniqueKeyValues ...string) string {
- var key string
- if len(uniqueKeyValues) == 1 {
- key = uniqueKeyValues[0]
- } else {
- key = strings.Join(uniqueKeyValues, ":")
- }
+ key := joinEscapedKeyParts(uniqueKeyValues...)
return fmt.Sprintf("%s:%s:u:%s:%s:%s", GormCachePrefix, instanceId, tableName, uniqueIndexName, key)
}Also applies to: 58-67
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@util/key.go` around lines 24 - 33, The current GenPrimaryCacheKey builds
composite keys by joining raw primaryKeyValues with ":" which can collide (e.g.
["a:b","c"] vs ["a","b:c"]); fix by encoding each primaryKeyValues element to a
delimiter-safe form (for example base64 URL-safe or percent-encoding) before
joining, then format the final key as before; apply the same change to the
analogous index/key functions shown later (e.g., GenIndexCacheKey or any
function at the other reported region) so all composite-key builders
consistently encode components prior to joining.
…collision
- query: use db.Statement.Context for singleflight leader cache reads (BatchGetPrimaryCache, BatchGetUniqueCache, GetSearchCache) so leader cancellation does not affect cache lookups.
- query: log unique cache write count only; avoid logging full uniqueKvs payload.
- cache: document that BatchSetUniqueCache mutates the caller's kvs slice.
- helpers: composite primary/unique keys use Cartesian product (generateCartesianProduct) so e.g. user_id=1, role_id IN (1,2,3) yields ["1:1","1:2","1:3"]; add TestGetPrimaryKeysFromWhereClause_CompositeKeyCartesianProduct.
- helpers: getObjectsAfterLoad composite key: treat nil ValueOf and zero values as invalid and skip incomplete keys (valid flag, no partial key append).
- util/key: encode composite key parts with base64.RawURLEncoding to avoid collision (e.g. "a:b","c" vs "a","b:c"); add TestGenPrimaryCacheKey_NoCollision; update key_test expected values for encoded output.
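For reference, one possible implementation of the Cartesian-product helper named above; the signature matches the `generateCartesianProduct(valueSlices [][]string, sep string) []string` shape visible in the review excerpt below, but the body is a sketch rather than the PR's exact code.

```go
package cache

// generateCartesianProduct (sketch) joins every combination of the per-field
// value slices with sep. For user_id=[1] and role_id=[1 2 3] it yields
// ["1:1", "1:2", "1:3"], matching the commit message's example.
func generateCartesianProduct(valueSlices [][]string, sep string) []string {
	if len(valueSlices) == 0 {
		return nil
	}
	result := []string{""}
	for i, values := range valueSlices {
		if len(values) == 0 {
			return nil
		}
		next := make([]string, 0, len(result)*len(values))
		for _, prefix := range result {
			for _, v := range values {
				if i == 0 {
					next = append(next, v)
				} else {
					next = append(next, prefix+sep+v)
				}
			}
		}
		result = next
	}
	return result
}
```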
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cache/query.go (1)
432-434: ⚠️ Potential issue | 🟠 Major
Async cache writes can be canceled before completion.

When `AsyncWrite` is true (Line 432), `wg.Wait()` is skipped, but `fillCallAfterQuery` still calls `cancel()` (Line 496). The goroutines writing search/primary/unique caches use this same context, so writes may be aborted nondeterministically.

🔧 Suggested fix

@@
-    if !cache.Config.AsyncWrite {
-        wg.Wait()
-    }
+    if cache.Config.AsyncWrite {
+        if cancelObj, hasCancel := db.InstanceGet("gorm:cache:query:single_flight_cancel"); hasCancel {
+            if cancel, ok := cancelObj.(context.CancelFunc); ok {
+                go func() {
+                    wg.Wait()
+                    cancel()
+                }()
+                // prevent immediate cancel in fillCallAfterQuery
+                db.InstanceSet("gorm:cache:query:single_flight_cancel", context.CancelFunc(func() {}))
+            }
+        }
+    } else {
+        wg.Wait()
+    }
@@
-    if cancelObj, hasCancel := db.InstanceGet("gorm:cache:query:single_flight_cancel"); hasCancel {
-        if cancel, ok := cancelObj.(context.CancelFunc); ok {
-            cancel()
-        }
-    }
+    if !h.cache.Config.AsyncWrite || db.Error != nil {
+        if cancelObj, hasCancel := db.InstanceGet("gorm:cache:query:single_flight_cancel"); hasCancel {
+            if cancel, ok := cancelObj.(context.CancelFunc); ok {
+                cancel()
+            }
+        }
+    }

Also applies to: 494-498
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/query.go` around lines 432 - 434, fillCallAfterQuery currently calls cancel() on the shared context which aborts the cache-write goroutines when cache.Config.AsyncWrite is true and wg.Wait() is skipped; update fillCallAfterQuery to avoid canceling the context used by the async writers: either derive a separate, non-cancelled context for those goroutines (e.g., use context.Background() or a child context created with context.WithCancel that you only cancel after the writers finish), or move cancel() to execute only after wg.Wait() when AsyncWrite is false; locate fillCallAfterQuery and the cancel() call and ensure the goroutines that perform search/primary/unique cache writes use a safe context that won't be canceled prematurely when cache.Config.AsyncWrite == true.
🧹 Nitpick comments (1)
cache/cache.go (1)
151-157: Avoid mutating caller-provided kvs in batch setters.

Both setters rewrite `kvs[idx].Key` in place. Reusing the same slice across retries/calls can produce double-prefixed keys and hard-to-debug behavior.

♻️ Suggested refactor

 func (c *Gorm2Cache) BatchSetUniqueCache(ctx context.Context, tableName string, uniqueIndexName string, kvs []util.Kv) error {
-    for idx, kv := range kvs {
+    cacheKvs := make([]util.Kv, 0, len(kvs))
+    for _, kv := range kvs {
         // kv.Key 已经是最终格式(单个值或已用":"连接的联合unique键),直接传入
-        kvs[idx].Key = util.GenUniqueCacheKey(c.InstanceId, tableName, uniqueIndexName, kv.Key)
+        cacheKvs = append(cacheKvs, util.Kv{
+            Key:   util.GenUniqueCacheKey(c.InstanceId, tableName, uniqueIndexName, kv.Key),
+            Value: kv.Value,
+        })
+    }
-    return c.cache.BatchSetKeys(ctx, kvs)
+    return c.cache.BatchSetKeys(ctx, cacheKvs)
 }

Also applies to: 194-200
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/cache.go` around lines 151 - 157, The BatchSetPrimaryKeyCache function currently mutates the caller-provided kvs slice by overwriting kvs[idx].Key which can cause double-prefixing on retries; instead, build a new slice of util.Kv (or copy each util.Kv) and set the generated key via util.GenPrimaryCacheKey on the copies, then pass that new slice to c.cache.BatchSetKeys; apply the same non-mutating fix to the other batch setter referenced (the method around lines 194-200) so neither Gorm2Cache.BatchSetPrimaryKeyCache nor the other batch setter mutates input slices.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cache/helpers.go`:
- Around line 545-546: The composite-key assembly uses strings.Join(keyParts,
":") which allows collisions (e.g., ["a:b","c"] vs ["a","b:c"]); fix by
canonical-encoding each key part before joining (for example base64 URL-safe or
percent-encoding) or use length-prefixed encoding for each part, then join the
encoded parts with a separator; update every place that currently calls
strings.Join(keyParts, ":") (the occurrences building primaryKeys/secondary
keys) so they encode keyParts first (keep the separator constant) and ensure any
code that parses/compares these keys decodes or consistently uses the same
encoding function.
- Around line 612-619: The generateCartesianProduct code multiplies n by len(s)
across valueSlices and preallocates result without any bound, which can blow up
memory; fix it by checking for integer overflow and enforcing a configurable
maximum product threshold before doing n *= len(s) (e.g., check if n >
maxAllowed/len(s) or if multiplication would overflow), and if the threshold is
exceeded return an error or use a streaming/iterator fallback instead of
preallocating result; update the logic around valueSlices, n, and result in
generateCartesianProduct to perform these guards and fail fast when the
Cartesian product would be too large.
---
Outside diff comments:
In `@cache/query.go`:
- Around line 432-434: fillCallAfterQuery currently calls cancel() on the shared
context which aborts the cache-write goroutines when cache.Config.AsyncWrite is
true and wg.Wait() is skipped; update fillCallAfterQuery to avoid canceling the
context used by the async writers: either derive a separate, non-cancelled
context for those goroutines (e.g., use context.Background() or a child context
created with context.WithCancel that you only cancel after the writers finish),
or move cancel() to execute only after wg.Wait() when AsyncWrite is false;
locate fillCallAfterQuery and the cancel() call and ensure the goroutines that
perform search/primary/unique cache writes use a safe context that won't be
canceled prematurely when cache.Config.AsyncWrite == true.
---
Nitpick comments:
In `@cache/cache.go`:
- Around line 151-157: The BatchSetPrimaryKeyCache function currently mutates
the caller-provided kvs slice by overwriting kvs[idx].Key which can cause
double-prefixing on retries; instead, build a new slice of util.Kv (or copy each
util.Kv) and set the generated key via util.GenPrimaryCacheKey on the copies,
then pass that new slice to c.cache.BatchSetKeys; apply the same non-mutating
fix to the other batch setter referenced (the method around lines 194-200) so
neither Gorm2Cache.BatchSetPrimaryKeyCache nor the other batch setter mutates
input slices.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- cache/cache.go
- cache/helpers.go
- cache/helpers_test.go
- cache/query.go
- util/key.go
- util/key_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- cache/helpers_test.go
primaryKeys = append(primaryKeys, strings.Join(keyParts, ":"))
}
Composite-key delimiter collisions are still possible.
Line 545, Line 592, and Line 626 still join raw parts with ":". Values like ["a:b","c"] and ["a","b:c"] collapse to the same key, which can cause wrong cache hits/invalidations.
🔐 Suggested fix (canonical encoded join)
+import "encoding/base64"
+
+func joinEncodedParts(parts []string, sep string) string {
+ encoded := make([]string, len(parts))
+ for i, p := range parts {
+ encoded[i] = base64.RawURLEncoding.EncodeToString([]byte(p))
+ }
+ return strings.Join(encoded, sep)
+}
@@
- primaryKeys = append(primaryKeys, strings.Join(keyParts, ":"))
+ primaryKeys = append(primaryKeys, joinEncodedParts(keyParts, ":"))
@@
- indexKeys[objIdx] = strings.Join(keyParts, ":")
+ indexKeys[objIdx] = joinEncodedParts(keyParts, ":")
@@
- result = append(result, strings.Join(parts, sep))
+ result = append(result, joinEncodedParts(parts, sep))

Also applies to: 592-593, 626-626
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cache/helpers.go` around lines 545 - 546, The composite-key assembly uses
strings.Join(keyParts, ":") which allows collisions (e.g., ["a:b","c"] vs
["a","b:c"]); fix by canonical-encoding each key part before joining (for
example base64 URL-safe or percent-encoding) or use length-prefixed encoding for
each part, then join the encoded parts with a separator; update every place that
currently calls strings.Join(keyParts, ":") (the occurrences building
primaryKeys/secondary keys) so they encode keyParts first (keep the separator
constant) and ensure any code that parses/compares these keys decodes or
consistently uses the same encoding function.
n := 1
for _, s := range valueSlices {
    if len(s) == 0 {
        return nil
    }
    n *= len(s)
}
result := make([]string, 0, n)
generateCartesianProduct is unbounded and can blow up memory on large IN sets.
n *= len(s) has no upper bound before preallocating result capacity. A query with multiple large IN lists can allocate massive memory on the request path.
🛡️ Suggested guard
func generateCartesianProduct(valueSlices [][]string, sep string) []string {
@@
- n := 1
+ const maxCartesianKeys = 10000
+ n := 1
for _, s := range valueSlices {
if len(s) == 0 {
return nil
}
+ if n > maxCartesianKeys/len(s) {
+ return nil // skip cache path instead of risking OOM
+ }
n *= len(s)
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cache/helpers.go` around lines 612 - 619, The generateCartesianProduct code
multiplies n by len(s) across valueSlices and preallocates result without any
bound, which can blow up memory; fix it by checking for integer overflow and
enforcing a configurable maximum product threshold before doing n *= len(s)
(e.g., check if n > maxAllowed/len(s) or if multiplication would overflow), and
if the threshold is exceeded return an error or use a streaming/iterator
fallback instead of preallocating result; update the logic around valueSlices,
n, and result in generateCartesianProduct to perform these guards and fail fast
when the Cartesian product would be too large.
…g, single ParseIndexes, crypto upgrade
- helpers_test: TestGetAllUniqueIndexes now asserts real behavior using schema.Parse on a model with uniqueIndex tags (idx_email, idx_username).
- dockertest: capture MySQL/PostgreSQL setup errors in package vars and t.Fatalf after sync.Once.Do so later tests see failures instead of nil DB.
- dockertest: add waitForCondition(interval, timeout, predicate) and TestWaitForCondition; replace fixed time.Sleep with polling in UpdateCompositeKey, Delete, UpdateUniqueKey, concurrent update, and DeleteAndRecreate for cache consistency.
- helpers: getUniqueKeysFromWhereClause returns (keys, indexes) and hasOtherClauseExceptUniqueField accepts optional *schema.Index to avoid duplicate ParseIndexes; query uses returned indexes, afterDelete/afterUpdate use keys only.
- go.mod: upgrade golang.org/x/crypto to latest (v0.48.0) for known vulnerability fixes; go mod tidy.
Actionable comments posted: 2
♻️ Duplicate comments (5)
cache/query.go (1)
121-123: ⚠️ Potential issue | 🟠 Major
Detach cancellation without dropping caller context values.

At Line 121, `context.Background()` discards request-scoped values (trace IDs, correlation metadata). Use `context.WithoutCancel(ctx)` as parent, then apply timeout.

Proposed fix

- bgCtx, cancel := context.WithTimeout(context.Background(), singleflightContextTimeout)
+ leaderParent := context.Background()
+ if ctx != nil {
+     leaderParent = context.WithoutCancel(ctx)
+ }
+ bgCtx, cancel := context.WithTimeout(leaderParent, singleflightContextTimeout)
  db.Statement.Context = bgCtx

In Go 1.24, does context.WithoutCancel preserve parent context values while preventing cancellation/deadline propagation?
Verify each finding against the current code and only fix it if needed. In `@cache/query.go` around lines 121 - 123, Replace the background parent with a non-canceling derivative of the caller context so caller-scoped values are preserved: create a parent via context.WithoutCancel(ctx) and then call context.WithTimeout on that parent to get bgCtx and cancel; set db.Statement.Context = bgCtx and db.InstanceSet("gorm:cache:query:single_flight_cancel", cancel) as before (use the existing variables bgCtx, cancel, singleflightContextTimeout and the caller ctx variable).

cache/afterUpdate.go (1)
74-87: ⚠️ Potential issue | 🟠 Major
Schema-less updates can skip fallback unique-cache invalidation.

At Line 75, fallback invalidation is gated by `db.Statement.Schema != nil`. For schema-less update statements, this path is skipped and unique caches may stay stale.

Proposed fix

-    } else {
-        // 如果没有从WHERE子句提取到unique键,失效所有unique键缓存
-        if db.Statement.Schema != nil {
-            allUniqueIndexes := getAllUniqueIndexes(db.Statement.Schema)
-            for indexName := range allUniqueIndexes {
+    } else {
+        // 如果没有从WHERE子句提取到unique键,失效所有unique键缓存
+        s := db.Statement.Schema
+        if s == nil && db.Statement.Model != nil {
+            stmt := &gorm.Statement{DB: db}
+            if err := stmt.Parse(db.Statement.Model); err == nil {
+                s = stmt.Schema
+            }
+        }
+        if s != nil {
+            allUniqueIndexes := getAllUniqueIndexes(s)
+            for indexName := range allUniqueIndexes {
                 cache.Logger.CtxInfo(ctx, "[AfterUpdate] now start to invalidate all unique cache for index %s", indexName)
                 err := cache.InvalidateAllUniqueCache(ctx, tableName, indexName)
Verify each finding against the current code and only fix it if needed. In `@cache/afterUpdate.go` around lines 74 - 87, The fallback unique-cache invalidation is skipped when db.Statement.Schema == nil, which leaves unique caches stale for schema-less updates; remove or relax the schema check so getAllUniqueIndexes is invoked even for schema-less statements (or add an alternate codepath) and call cache.InvalidateAllUniqueCache(ctx, tableName, indexName) for each index; update the logic around db.Statement.Schema, getAllUniqueIndexes, InvalidateAllUniqueCache and the surrounding AfterUpdate block to ensure unique-cache invalidation runs for schema-less updates (preserve existing logging via cache.Logger.CtxInfo/CtxError).

cache/helpers.go (2)
555-556: ⚠️ Potential issue | 🟠 Major
Composite-key collision risk remains with raw ":" joins.

Line 555, Line 602, and Line 636 still join unescaped parts. Values like `["a:b","c"]` and `["a","b:c"]` collapse to the same key and can cause wrong cache hits/invalidations.

Proposed fix

 import (
     "context"
+    "encoding/base64"
     "fmt"
@@
 )
+
+func joinEncodedParts(parts []string, sep string) string {
+    encoded := make([]string, len(parts))
+    for i, p := range parts {
+        encoded[i] = base64.RawURLEncoding.EncodeToString([]byte(p))
+    }
+    return strings.Join(encoded, sep)
+}
@@
-    primaryKeys = append(primaryKeys, strings.Join(keyParts, ":"))
+    primaryKeys = append(primaryKeys, joinEncodedParts(keyParts, ":"))
@@
-    indexKeys[objIdx] = strings.Join(keyParts, ":")
+    indexKeys[objIdx] = joinEncodedParts(keyParts, ":")
@@
-    result = append(result, strings.Join(parts, sep))
+    result = append(result, joinEncodedParts(parts, sep))

Also applies to: 602-603, 636-636
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/helpers.go` around lines 555 - 556, The composite-key collision exists because parts in keyParts are joined with a raw ":" (e.g., primaryKeys = append(primaryKeys, strings.Join(keyParts, ":"))) causing collisions for inputs like ["a:b","c"] vs ["a","b:c"]; fix by encoding or escaping each part before joining — for example add a helper (e.g., safeEncodePart or escapeKeyPart) that deterministically encodes each key part (URL-safe base64, hex, or length-prefix) and replace strings.Join(keyParts, ":") with strings.Join(encodedParts, ":"); update all occurrences that build keys from keyParts / primaryKeys (the append lines and the similar joins at the other noted spots) to use this helper so joined composite keys are collision-free.
622-629: ⚠️ Potential issue | 🟠 Major
Bound Cartesian-product size before preallocation.

Line 622-Line 629 multiplies product size without a cap. Large `IN` combinations can allocate excessive memory on the query path.

Proposed fix

 func generateCartesianProduct(valueSlices [][]string, sep string) []string {
     if len(valueSlices) == 0 {
         return nil
     }
-    n := 1
+    const maxCartesianKeys = 10000
+    n := 1
     for _, s := range valueSlices {
         if len(s) == 0 {
             return nil
         }
+        if n > maxCartesianKeys/len(s) {
+            return nil
+        }
         n *= len(s)
     }
Verify each finding against the current code and only fix it if needed. In `@cache/helpers.go` around lines 622 - 629, The code multiplies lengths of valueSlices into n and preallocates result := make([]string, 0, n) which can overflow or allocate huge memory; introduce a safe cap (e.g. maxCartesianProductSize) and bound the multiplication: in the loop over valueSlices use a check like if n > maxCartesianProductSize/len(s) { n = maxCartesianProductSize; break } else n *= len(s); keep the existing empty-slice early return, then ensure before make(...) you cast/limit n to int and clamp n = min(n, maxCartesianProductSize) to avoid excessive allocations; reference the variables valueSlices, n and result when making these changes.

cache/afterDelete.go (1)
72-85: ⚠️ Potential issue | 🟠 Major
Schema-less deletes can skip fallback unique-cache invalidation.

Line 73 only invalidates all unique indexes when `db.Statement.Schema` exists. Schema-less delete statements can leave unique caches stale.

Proposed fix

-    } else {
-        // 如果没有从WHERE子句提取到unique键,失效所有unique键缓存
-        if db.Statement.Schema != nil {
-            allUniqueIndexes := getAllUniqueIndexes(db.Statement.Schema)
-            for indexName := range allUniqueIndexes {
+    } else {
+        // 如果没有从WHERE子句提取到unique键,失效所有unique键缓存
+        s := db.Statement.Schema
+        if s == nil && db.Statement.Model != nil {
+            stmt := &gorm.Statement{DB: db}
+            if err := stmt.Parse(db.Statement.Model); err == nil {
+                s = stmt.Schema
+            }
+        }
+        if s != nil {
+            allUniqueIndexes := getAllUniqueIndexes(s)
+            for indexName := range allUniqueIndexes {
                 cache.Logger.CtxInfo(ctx, "[AfterDelete] now start to invalidate all unique cache for index %s", indexName)
                 err := cache.InvalidateAllUniqueCache(ctx, tableName, indexName)
Verify each finding against the current code and only fix it if needed. In `@cache/afterDelete.go` around lines 72 - 85, The code currently skips invalidating all unique caches when db.Statement.Schema is nil; change the logic so unique-cache invalidation always runs: either remove the db.Statement.Schema nil-check around the getAllUniqueIndexes call or, when Schema is nil, fetch the table schema/index list via a fallback (e.g., a new helper that derives indexes from tableName or reads the schema from DB/cached metadata) and then call getAllUniqueIndexes (or an equivalent function) and cache.InvalidateAllUniqueCache for each indexName; update AfterDelete to always iterate indexes and call cache.InvalidateAllUniqueCache (logging via cache.Logger.CtxInfo/CtxError) so schema-less deletes don’t leave unique caches stale.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cache/dockertest_integration_test.go`:
- Around line 362-368: The failure reporting for the query into result4 can
panic because result4[0].Name is accessed unconditionally; modify the test after
the db.Where(...).Find(&result4) call to avoid indexing when the slice length is
unexpected: first check len(result4) and in the error path include the actual
length (and optionally the entire result4 contents) in the t.Errorf message
instead of accessing result4[0].Name so the test reports a safe, informative
failure without panicking.
- Around line 857-864: The goroutine's concurrent updates drop the returned
error from db.Model(&UserRole{}).Where(...).Update("name", newName), so failed
writes are ignored; capture the result of Update(...).Error inside the anonymous
function (in the goroutine) and assert or fail the test (e.g., t.Fatalf/t.Errorf
or send an error to a channel) when error != nil so the test surface shows write
failures; reference the anonymous goroutine,
db.Model(&UserRole{}).Where(...).Update, and the done channel to locate and
modify the code.
---
Duplicate comments:
In `@cache/afterDelete.go`:
- Around line 72-85: The code currently skips invalidating all unique caches
when db.Statement.Schema is nil; change the logic so unique-cache invalidation
always runs: either remove the db.Statement.Schema nil-check around the
getAllUniqueIndexes call or, when Schema is nil, fetch the table schema/index
list via a fallback (e.g., a new helper that derives indexes from tableName or
reads the schema from DB/cached metadata) and then call getAllUniqueIndexes (or
an equivalent function) and cache.InvalidateAllUniqueCache for each indexName;
update AfterDelete to always iterate indexes and call
cache.InvalidateAllUniqueCache (logging via cache.Logger.CtxInfo/CtxError) so
schema-less deletes don’t leave unique caches stale.
In `@cache/afterUpdate.go`:
- Around line 74-87: The fallback unique-cache invalidation is skipped when
db.Statement.Schema == nil, which leaves unique caches stale for schema-less
updates; remove or relax the schema check so getAllUniqueIndexes is invoked even
for schema-less statements (or add an alternate codepath) and call
cache.InvalidateAllUniqueCache(ctx, tableName, indexName) for each index; update
the logic around db.Statement.Schema, getAllUniqueIndexes,
InvalidateAllUniqueCache and the surrounding AfterUpdate block to ensure
unique-cache invalidation runs for schema-less updates (preserve existing
logging via cache.Logger.CtxInfo/CtxError).
In `@cache/helpers.go`:
- Around line 555-556: The composite-key collision exists because parts in
keyParts are joined with a raw ":" (e.g., primaryKeys = append(primaryKeys,
strings.Join(keyParts, ":"))) causing collisions for inputs like ["a:b","c"] vs
["a","b:c"]; fix by encoding or escaping each part before joining — for example
add a helper (e.g., safeEncodePart or escapeKeyPart) that deterministically
encodes each key part (URL-safe base64, hex, or length-prefix) and replace
strings.Join(keyParts, ":") with strings.Join(encodedParts, ":"); update all
occurrences that build keys from keyParts / primaryKeys (the append lines and
the similar joins at the other noted spots) to use this helper so joined
composite keys are collision-free.
- Around line 622-629: The code multiplies lengths of valueSlices into n and
preallocates result := make([]string, 0, n) which can overflow or allocate huge
memory; introduce a safe cap (e.g. maxCartesianProductSize) and bound the
multiplication: in the loop over valueSlices use a check like if n >
maxCartesianProductSize/len(s) { n = maxCartesianProductSize; break } else n *=
len(s); keep the existing empty-slice early return, then ensure before make(...)
you cast/limit n to int and clamp n = min(n, maxCartesianProductSize) to avoid
excessive allocations; reference the variables valueSlices, n and result when
making these changes.
In `@cache/query.go`:
- Around line 121-123: Replace the background parent with a non-canceling
derivative of the caller context so caller-scoped values are preserved: create a
parent via context.WithoutCancel(ctx) and then call context.WithTimeout on that
parent to get bgCtx and cancel; set db.Statement.Context = bgCtx and
db.InstanceSet("gorm:cache:query:single_flight_cancel", cancel) as before (use
the existing variables bgCtx, cancel, singleflightContextTimeout and the caller
ctx variable).
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (7)
- cache/afterDelete.go
- cache/afterUpdate.go
- cache/dockertest_integration_test.go
- cache/helpers.go
- cache/helpers_test.go
- cache/query.go
- go.mod
- dockertest: check len(result4) before using result4[0].Name so test failure reporting does not panic when the query returns an unexpected number of rows.
- dockertest: capture Update(...).Error in concurrent update goroutines, send errors on errCh, and t.Fatalf on any error after goroutines finish so failed writes fail the test.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cache/dockertest_integration_test.go`:
- Line 677: The cache-priming queries (e.g., the
db.Where(...).First(&UserRole{}) calls) currently ignore errors and can leave
the cache unpopulated; update each warm-up query to inspect the returned
result's Error and fail the test immediately on error (using t.Fatalf or
require.NoError) so a failed priming query doesn't lead to misleading test
results — locate the db.Where(...).First(&UserRole{}) calls and other similar
warm-up queries and add error checks that surface any non-nil result.Error.
- Line 136: The DROP TABLE Exec calls currently ignore errors; change both Exec
calls (the mysqlDB.Exec("DROP TABLE IF EXISTS user_roles, users, user_sessions")
instances) to capture the returned *gorm.DB (e.g., res := mysqlDB.Exec(...)) and
check res.Error, failing the test or logging the error (t.Fatalf or
require.NoError) if non-nil so schema-reset failures don't silently pass and
cause flaky tests.
- Line 768: Replace direct equality checks against gorm.ErrRecordNotFound with
errors.Is to properly handle wrapped errors: import the "errors" package and
change the predicate in the function containing the line returning err != nil &&
err == gorm.ErrRecordNotFound to use errors.Is(err, gorm.ErrRecordNotFound);
update the conditional in the same scope that reads "} else if err !=
gorm.ErrRecordNotFound {" to "} else if !errors.Is(err, gorm.ErrRecordNotFound)
{"; and change the return that currently does return db.Where("user_id = ? AND
role_id = ?", 2000, 3000).First(&r).Error == gorm.ErrRecordNotFound to return
errors.Is(db.Where("user_id = ? AND role_id = ?", 2000, 3000).First(&r).Error,
gorm.ErrRecordNotFound). Keep the rest of the logic intact.
- Line 317: The call to db.Use(cache) (where cache is a Gorm2Cache plugin using
util.GormCachePrefix) returns an error that is currently ignored; update each
occurrence (e.g., the calls at db.Use(cache) in dockertest_integration_test.go)
to check the returned error and handle it (fail the test with t.Fatalf or t.Skip
as appropriate) so repeated plugin registration errors are
surfaced—specifically, capture the error from db.Use(cache), and if err != nil,
call t.Fatalf("db.Use(cache) failed: %v", err) (or an equivalent test-failing
handler) so tests do not proceed silently without the plugin registered.
- Enhanced error handling in MySQL and PostgreSQL table cleanup by checking for execution errors and reporting them with t.Fatalf.
- Updated cache plugin registration in various tests to capture and report errors, ensuring test failures are clearly communicated.
- Improved error checking for cache warming queries to provide more informative failure messages.
Actionable comments posted: 1
♻️ Duplicate comments (3)
cache/dockertest_integration_test.go (3)
625-625: ⚠️ Potential issue | 🟠 Major

Check the db.Use(cache) error in this test too.

Line 625 is the one remaining db.Use(cache) call that ignores the returned error, so this test can proceed without the plugin actually attached.

Fix:

```diff
- db.Use(cache)
+ if err := db.Use(cache); err != nil {
+ 	t.Fatalf("failed to register cache plugin: %v", err)
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/dockertest_integration_test.go` at line 625, The call to db.Use(cache) in the test currently ignores the returned error so the plugin may not be attached; update the test to capture and check the error from db.Use(cache) (e.g. err := db.Use(cache)) and fail the test when err != nil using the test helper in this package (for example t.Fatal/t.Fatalf or require.NoError depending on the file's test style) so the test cannot continue when the plugin attachment fails.
854-854: ⚠️ Potential issue | 🟠 Major

Warm-up query error is still ignored.

Line 854 drops query errors; if priming fails, the test can report misleading cache-consistency results.

Fix:

```diff
- db.Where("user_id = ?", 700).Find(&[]UserRole{})
+ var warmup []UserRole
+ if err := db.Where("user_id = ?", 700).Find(&warmup).Error; err != nil {
+ 	t.Fatalf("failed to warm cache: %v", err)
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/dockertest_integration_test.go` at line 854, The warm-up query using db.Where(...).Find(&[]UserRole{}) currently ignores the returned error; update the call to capture the result (e.g., res := db.Where("user_id = ?", 700).Find(&[]UserRole{})) and check res.Error, failing the test on error (use t.Fatalf or your test helper like require.NoError) so priming failures are surfaced; reference the db variable and the UserRole slice used in dockertest_integration_test.go to locate and fix the query.
63-73: ⚠️ Potential issue | 🔴 Critical

Shared DB instances conflict with per-test plugin registration.

Using setupMySQLOnce/setupPGOnce with global *gorm.DB while calling db.Use(cache) in many tests makes plugin registration state shared across tests. With GORM plugin uniqueness per DB instance, this creates order-dependent failures after the first registration. Consider creating a fresh *gorm.DB per test (the container can still be a singleton), or register the plugin once and reset state between tests.

Possible direction (fresh DB per test, singleton container):

```diff
- var mysqlDB *gorm.DB
+ // keep container/pool singleton, but open DB handle per test
  func setupMySQL(t *testing.T) *gorm.DB {
  	setupMySQLOnce.Do(func() {
- 		// start container + open mysqlDB once
+ 		// start container + set mysqlDSN once
  	})
+ 	db, err := gorm.Open(mysql.Open(mysqlDSN), &gorm.Config{
+ 		Logger: logger.Default.LogMode(logger.Silent),
+ 	})
+ 	if err != nil {
+ 		t.Fatalf("open MySQL DB failed: %v", err)
+ 	}
+ 	t.Cleanup(func() {
+ 		if sqlDB, err := db.DB(); err == nil && sqlDB != nil {
+ 			_ = sqlDB.Close()
+ 		}
+ 	})
- 	return mysqlDB
+ 	return db
  }
```

Also applies to: 76-77, 150-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/dockertest_integration_test.go` around lines 63 - 73, The tests share global *gorm.DB instances (pgDB/mysql DB) created under setupPGOnce/setupMySQLOnce which causes gorm plugin registration (db.Use(cache)) to leak across tests; change the setup so the Docker container/pool (pgPool/mysql pool) remains singleton but create a fresh *gorm.DB per test (open a new gorm.DB inside each test or a per-test setup helper) instead of reusing pgDB/mysql global, or alternatively register the plugin once on a per-DB-template and reset plugin state between tests; update references to setupPGOnce/setupMySQLOnce and pgDB/mysql global vars so tests call a helper that returns a new *gorm.DB instance for each test run.
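If the fresh-DB-per-test direction above is taken, each test registers the plugin on its own handle, so the GORM plugin-uniqueness rule never bites. A possible usage shape; newTestCache is a hypothetical helper standing in for however the suite actually constructs its Gorm2Cache instance:

```go
func TestSomething_MySQL(t *testing.T) {
	db := setupMySQL(t) // fresh *gorm.DB per test; the container stays a singleton

	cachePlugin := newTestCache(t) // hypothetical helper wrapping the real Gorm2Cache constructor
	if err := db.Use(cachePlugin); err != nil {
		t.Fatalf("failed to register cache plugin: %v", err)
	}

	// ... test body runs against an isolated plugin registration ...
}
```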
🧹 Nitpick comments (1)
cache/dockertest_integration_test.go (1)
819-820: Prefer waitForCondition over fixed sleeps for stability.

These fixed sleeps can still be flaky under CI load. Since waitForCondition already exists in this file, using it here would make these scenarios more deterministic.

Also applies to: 860-861, 1010-1010
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cache/dockertest_integration_test.go` around lines 819 - 820, Replace the fragile time.Sleep(50 * time.Millisecond) calls with the existing waitForCondition helper to make tests deterministic: locate each occurrence of time.Sleep(...) (including the other instances flagged) and call waitForCondition with a predicate that checks the specific post-condition/assertion you expect (e.g., container ready, cache populated) and a reasonable timeout/interval instead of sleeping; ensure you pass a descriptive failure message so the test fails clearly if the condition is not met.
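The generic shape of the poll-instead-of-sleep pattern looks roughly like the helper below. The file's own waitForCondition is only named in the comment, so its exact signature is assumed rather than copied:

```go
package cache_test

import (
	"testing"
	"time"
)

// waitFor polls cond until it returns true or the timeout expires.
// Illustrative stand-in for the test file's existing waitForCondition helper.
func waitFor(t *testing.T, timeout, interval time.Duration, cond func() bool, msg string) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return
		}
		time.Sleep(interval)
	}
	t.Fatalf("condition not met within %s: %s", timeout, msg)
}
```

A call site would then wait for the actual post-condition (for example, until the cached row becomes readable) instead of sleeping for a fixed 50ms and hoping the asynchronous write has landed.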
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cache/dockertest_integration_test.go`:
- Around line 725-727: The code calls db.DB() and ignores its error (sqlDB, _ :=
db.DB()) before invoking sqlDB.QueryRow(), which can cause a panic; update each
occurrence (the sqlDB, _ := db.DB() sites that are followed by QueryRow calls)
to capture the error (sqlDB, err := db.DB()), check it (e.g., if err != nil {
t.Fatalf("db.DB() failed: %v", err) } or require.NoError(t, err)), and only then
call sqlDB.QueryRow(...), so the test fails cleanly when DB retrieval fails.
---
Duplicate comments:
In `@cache/dockertest_integration_test.go`:
- Line 625: The call to db.Use(cache) in the test currently ignores the returned
error so the plugin may not be attached; update the test to capture and check
the error from db.Use(cache) (e.g. err := db.Use(cache)) and fail the test when
err != nil using the test helper in this package (for example t.Fatal/t.Fatalf
or require.NoError depending on the file's test style) so the test cannot
continue when the plugin attachment fails.
- Line 854: The warm-up query using db.Where(...).Find(&[]UserRole{}) currently
ignores the returned error; update the call to capture the result (e.g., res :=
db.Where("user_id = ?", 700).Find(&[]UserRole{})) and check res.Error, failing
the test on error (use t.Fatalf or your test helper like require.NoError) so
priming failures are surfaced; reference the db variable and the UserRole slice
used in dockertest_integration_test.go to locate and fix the query.
- Around line 63-73: The tests share global *gorm.DB instances (pgDB/mysql DB)
created under setupPGOnce/setupMySQLOnce which causes gorm plugin registration
(db.Use(cache)) to leak across tests; change the setup so the Docker
container/pool (pgPool/mysql pool) remains singleton but create a fresh *gorm.DB
per test (open a new gorm.DB inside each test or a per-test setup helper)
instead of reusing pgDB/mysql global, or alternatively register the plugin once
on a per-DB-template and reset plugin state between tests; update references to
setupPGOnce/setupMySQLOnce and pgDB/mysql global vars so tests call a helper
that returns a new *gorm.DB instance for each test run.
---
Nitpick comments:
In `@cache/dockertest_integration_test.go`:
- Around line 819-820: Replace the fragile time.Sleep(50 * time.Millisecond)
calls with the existing waitForCondition helper to make tests deterministic:
locate each occurrence of time.Sleep(...) (including the other instances
flagged) and call waitForCondition with a predicate that checks the specific
post-condition/assertion you expect (e.g., container ready, cache populated) and
a reasonable timeout/interval instead of sleeping; ensure you pass a descriptive
failure message so the test fails clearly if the condition is not met.
…t integration tests - Removed unused database connection variables for MySQL and PostgreSQL. - Simplified database connection setup by directly using the connection returned from gorm.Open. - Enhanced cleanup logic to ensure proper closure of database connections after tests. - Improved error handling during database setup and migration processes for clearer test failure reporting.
…fterUpdate - Updated AfterDelete and AfterUpdate functions to log successful cache invalidation messages, enhancing traceability. - Refactored context cancellation handling in query operations to prevent premature termination of asynchronous writes, ensuring proper resource management.
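A minimal sketch of the async-write part of that change: the write runs on a context that is not parented to the incoming request, so a client cancel or timeout cannot terminate it prematurely. All type and field names below are illustrative, not the plugin's actual internals.

```go
package cache

import (
	"context"
	"log"
	"time"
)

// kvStore is a stand-in for the real cache backend interface.
type kvStore interface {
	Set(ctx context.Context, key string, value []byte) error
}

type exampleCache struct {
	store kvStore
}

// asyncSetAfterQuery schedules a cache write that outlives the request.
func (c *exampleCache) asyncSetAfterQuery(_ context.Context, key string, value []byte) {
	// Fresh context with its own timeout; the request context is deliberately
	// not used as the parent, so cancelling the request cannot abort the write.
	writeCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)

	go func() {
		defer cancel() // release the timer once the write finishes
		if err := c.store.Set(writeCtx, key, value); err != nil {
			log.Printf("async cache write for %s failed: %v", key, err)
		}
	}()
}
```

On Go 1.21+, context.WithoutCancel(reqCtx) is an alternative when the write should still inherit request-scoped values while ignoring cancellation.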
- Refactored AfterDelete and AfterUpdate functions to improve unique key extraction from the database statement. - Added logic to handle cases where the schema is not directly available, ensuring all unique indexes are invalidated correctly. - Improved cache invalidation logging for better traceability during delete and update operations.
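A rough sketch of what "invalidate all unique indexes" can look like when the model's schema is available: walk the parsed indexes, keep the unique ones, and derive one invalidation prefix per index. The key layout and helper name are assumptions for illustration only; the schema walk uses GORM's schema package, but the real plugin code may differ.

```go
package cache

import (
	"fmt"
	"strings"

	"gorm.io/gorm/schema"
)

// uniqueIndexPrefixes derives one cache-key prefix per unique index of a
// model so that every entry under those prefixes can be invalidated when a
// row changes. The "%s:unique:%s:%s" layout is illustrative.
func uniqueIndexPrefixes(cachePrefix string, sch *schema.Schema) []string {
	var prefixes []string
	for _, idx := range sch.ParseIndexes() {
		if idx.Class != "UNIQUE" {
			continue // only unique indexes participate in unique-key caching
		}
		cols := make([]string, 0, len(idx.Fields))
		for _, f := range idx.Fields {
			cols = append(cols, f.DBName)
		}
		prefixes = append(prefixes,
			fmt.Sprintf("%s:unique:%s:%s", cachePrefix, sch.Table, strings.Join(cols, ",")))
	}
	return prefixes
}
```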
🎉 This PR is included in version 1.3.0 🎉

The release is available on GitHub release.

Your semantic-release bot 📦🚀
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores