Add Benchmarks for VersionVector Performance Analysis #1150
Conversation
Walkthrough
This pull request refactors cleanup logic in benchmark tests by removing the custom cleanup code in favor of a centralized helper.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant BV as BenchmarkVersionVector
    participant TS as TestServer
    participant IC as Clients/Docs Initialization
    participant VV as Benchmark Loop
    BV->>TS: startTestServer(snapshotInterval, snapshotThreshold)
    BV->>IC: initializeClientsAndDocs(n, docKey)
    IC->>IC: [Loop] For each client: initializeClientAndDoc (activate & attach document)
    BV->>VV: benchmarkVV(clientCnt)
    VV->>Clients: Execute operations (activate, text updates, sync)
    BV-->>TS: Shutdown test server (deferred)
```
Actionable comments posted: 0
🧹 Nitpick comments (1)
test/bench/vv_bench_test.go (1)
Lines 96-198: Consider adding memory usage metrics. The benchmark effectively measures operation times and data sizes. Based on the PR objectives, which mention significant memory usage differences between implementations (5GB vs 50GB), consider adding memory profiling metrics.
Add memory profiling using the runtime package:

```diff
+import "runtime"

 func benchmarkVV(clientCnt int, b *testing.B) {
+	var m runtime.MemStats
 	for i := 0; i < b.N; i++ {
+		runtime.ReadMemStats(&m)
+		initialAlloc := m.Alloc
 		// ... existing code ...
+		runtime.ReadMemStats(&m)
+		b.ReportMetric(float64(m.Alloc-initialAlloc), "memory_allocated_bytes")
 	}
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- test/bench/grpc_bench_test.go (2 hunks)
- test/bench/vv_bench_test.go (1 hunks)
- test/helper/helper.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: build
🔇 Additional comments (8)
test/bench/vv_bench_test.go (5)
Lines 43-56: LGTM! The server initialization logic is well-structured with appropriate error handling for test setup.
Lines 58-60: LGTM! The key generation logic ensures uniqueness by combining index and timestamp.
Lines 62-78: LGTM! The initialization logic is well-structured with appropriate error handling using test assertions.
Lines 80-94: LGTM! The function efficiently handles bulk initialization by reusing the single initialization logic.
Lines 200-225: LGTM! The benchmark setup is comprehensive, covering various client counts as specified in the PR objectives. The server lifecycle is properly managed.
test/bench/grpc_bench_test.go (2)
Line 192: LGTM! Good refactoring to use the centralized cleanup helper.
Line 258: LGTM! Consistent use of the centralized cleanup helper.
test/helper/helper.go (1)
Lines 557-563: LGTM! The helper function effectively centralizes the client cleanup logic with appropriate error handling.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #1150      +/-   ##
==========================================
- Coverage   38.59%   38.46%   -0.13%
==========================================
  Files         165      169       +4
  Lines       25293    25433     +140
==========================================
+ Hits         9761     9783      +22
- Misses      14712    14829    +117
- Partials      820      821       +1
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 0
🧹 Nitpick comments (1)
test/bench/grpc_bench_test.go (1)
Lines 299-304: LGTM! Consider cleanup for the created project. Good changes to use a real project for benchmarking. However, consider cleaning up the created project after the benchmark completes to prevent resource leaks.
Add project cleanup in the defer block:

```diff
 adminCli := helper.CreateAdminCli(b, defaultServer.RPCAddr())
-defer func() { adminCli.Close() }()
+defer func() {
+	if err := adminCli.DeleteProject(ctx, project.ID.String()); err != nil {
+		b.Logf("Failed to delete project: %v", err)
+	}
+	adminCli.Close()
+}()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- test/bench/grpc_bench_test.go (6 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: build
- GitHub Check: bench
🔇 Additional comments (4)
test/bench/grpc_bench_test.go (4)
Line 28: LGTM! Good practice using an alias for the time package to avoid potential naming conflicts.
Lines 94-107: LGTM! Improved function reusability and name uniqueness. Good improvements:
- The function now accepts a project parameter instead of using hardcoded values
- Using timestamps for name generation reduces the chance of conflicts
Line 192: LGTM! Centralized cleanup logic. Good refactoring to use the centralized helper.CleanupClients function, promoting code reuse and consistency.
Line 258: LGTM! Consistent cleanup approach. Good use of the centralized cleanup function, maintaining consistency with other tests.
Thanks for your contribution.
What this PR does / why we need it:
Overview
Added benchmark tests to analyze the performance implications of the decision to keep detached clients' Lamport timestamps in the Version Vector implementation. The tests can be compared by running them in each branch:
Test Scenario
Key Findings
Performance Metrics
Total Operation Time
Memory Usage
ChangePack Size
Snapshot Size
Push-Pull Performance
These results suggest that while the Version Vector implementation provides certain benefits, it comes with significant performance trade-offs, particularly in large-scale deployments. The Lamport-based approach shows better scalability across all measured metrics.
Future Works
We plan to analyze and optimize Version Vector performance using pprof and Go trace tools to:
These analyses will help us understand and reduce the performance gap between VV and Lamport implementations.
Which issue(s) this PR fixes:
Addresses #1144
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation:
Checklist:
Summary by CodeRabbit
New Features
Refactor
Chores
test/bench directory.