docs: add bid engine documentation #295
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- bidengine/config.go (1 hunks)
- bidengine/order.go (11 hunks)
- bidengine/service.go (4 hunks)
- cluster/inventory.go (2 hunks)
🧰 Additional context used
🪛 golangci-lint (1.64.8)
bidengine/order.go
35-35: configuraiton is a misspelling of configuration
(misspell)
🪛 GitHub Check: lint
bidengine/order.go
[failure] 35-35:
configuraiton is a misspelling of configuration (misspell)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: integration-tests / crd-e2e
- GitHub Check: release-dry-run
- GitHub Check: build-bins
🔇 Additional comments (8)
bidengine/config.go (1)

11-12: Excellent documentation improvement! The added comment clearly explains the purpose and scope of the `Config` struct, making the code more self-documenting and easier to understand for new developers.

cluster/inventory.go (2)

521-521: Good logging standardization! Changing the log key from `"err"` to `"error"` improves consistency and makes logs more readable for monitoring and debugging tools.

689-689: Consistent logging improvement! The standardization of error logging keys enhances log consistency across the codebase.
bidengine/order.go (3)
30-61: Outstanding documentation enhancement!The detailed comments for each struct field significantly improve code readability and maintainability. This makes it much easier for developers to understand the purpose and lifecycle of each component.
176-191: Excellent channel documentation!The detailed comments explaining the purpose of each channel variable greatly enhance code clarity and make the complex async logic much easier to follow.
236-236: Consistent logging key standardization! The systematic change from `"err"` to `"error"` in log messages improves consistency across the codebase and enhances log readability for monitoring and debugging tools. Also applies to: 334-334, 350-350, 373-373, 409-409, 440-440, 475-475, 491-491, 583-583
bidengine/service.go (2)
107-139: Comprehensive service documentation! The detailed comments for the `service` struct fields provide excellent clarity on the purpose and relationships between components. This significantly improves code maintainability and developer onboarding.
264-264: Consistent logging standardization! The change from `"err"` to `"error"` in logging keys maintains consistency with the logging improvements across the entire codebase. Also applies to: 273-273, 295-295
@cloud-j-luna - struct field comments and standardization on "error" make sense and LGTM. @troian - as it pertains to comment standardization - again, all looks good from my perspective. Any concerns on commenting style/content from your perspective?
…ssues (akash-network#295)

This commit addresses two related issues that prevented proper deployment and access to services using shared memory (SHM) volumes and caused lease-shell to fail when attempting to access StatefulSet workloads.

## Problem Analysis

1. **SHM Volume Mount Naming Mismatch**: Services with SHM storage failed to deploy with error "volumeMounts[0].name: Not found: 'test-shm'". The issue was inconsistent naming between volume creation and volume mount creation in the workload builder.
2. **Incorrect StatefulSet Detection**: The lease-shell feature failed with "statefulsets.apps 'test' not found" because ServiceStatus() incorrectly determined workload type by checking whether any storage parameter had a mount path, rather than checking for persistent storage.

## Root Cause

### Volume Naming Issue
- Volume mounts used: `fmt.Sprintf("%s-%s", service.Name, params.Name)`
- Volumes used: `fmt.Sprintf("%s-%s", service.Name, storage.Name)`
- For SHM volumes, `params.Name` and `storage.Name` could differ

### StatefulSet Detection Issue
- ServiceStatus() checked `param.Mount != ""` to determine StatefulSet
- Deploy() checked `storage.Attributes.Find(sdl.StorageAttributePersistent)`
- This mismatch caused ServiceStatus to look for the wrong workload type

## Solution

### Volume Mount Fix
- Ensured both volume mounts and volumes resolve to consistent names
- Volume mounts: `fmt.Sprintf("%s-%s", service.Name, params.Name)`
- Volumes: `fmt.Sprintf("%s-%s", service.Name, storage.Name)`
- PVC names: `fmt.Sprintf("%s-%s", service.Name, storage.Name)`

### StatefulSet Detection Fix
- Updated ServiceStatus() to use the same logic as Deploy()
- Changed from checking `param.Mount != ""`
- To checking `storage.Attributes.Find(sdl.StorageAttributePersistent).AsBool()`
- Added a comprehensive comment explaining the requirement for consistency

## Testing

Verified the fix resolves both issues:
- SHM volumes mount successfully without naming conflicts
- lease-shell correctly identifies and connects to StatefulSet workloads
- Deployment type detection matches between ServiceStatus and Deploy

Resolves: akash-network#295
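The naming constraint behind the volume mount fix can be sketched as follows. `volumeName` and `volumeMountName` are hypothetical helpers mirroring the `fmt.Sprintf` patterns quoted above; Kubernetes rejects a pod spec whenever a mount's name does not match an existing volume's name:

```go
package main

import "fmt"

// volumeName mirrors how the workload builder names volumes: service name
// joined with the storage entry's name (hypothetical helper for illustration).
func volumeName(serviceName, storageName string) string {
	return fmt.Sprintf("%s-%s", serviceName, storageName)
}

// volumeMountName mirrors how volume mounts are named: service name joined
// with the params entry's name. For SHM volumes, params.Name and storage.Name
// could differ, producing a mount that references a nonexistent volume.
func volumeMountName(serviceName, paramName string) string {
	return fmt.Sprintf("%s-%s", serviceName, paramName)
}

func main() {
	// The fix ensures both sides resolve to the same string, so the mount
	// "test-shm" finds a volume also named "test-shm".
	fmt.Println(volumeName("test", "shm"))
	fmt.Println(volumeMountName("test", "shm"))
}
```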
akash-network#295) This commit addresses two critical issues in the Akash provider:

1. **StatefulSet Detection Fix**:
   - Fixed incorrect logic in ServiceStatus that was checking for any mounted storage instead of persistent storage to determine workload type
   - The bug caused services with RAM volumes to incorrectly attempt StatefulSet operations, resulting in "statefulsets.apps not found" errors
   - Now correctly checks `storage.Attributes.Find(sdl.StorageAttributePersistent)` to match the deployment creation logic
2. **WebSocket Error Handling in lease-shell**:
   - Fixed improper error handling when ServiceStatus fails during shell operations
   - Previously used `http.Error()` on WebSocket connections, causing protocol violations
   - Now properly uses the WebSocket writer with `LeaseShellCodeFailure` and logs errors
   - Prevents silent failures and improves debugging for shell access issues

Added comprehensive unit tests for the StatefulSet detection logic covering:
- Services with persistent storage (should use StatefulSet)
- Services with non-persistent storage (should use Deployment)
- Services with no storage (should use Deployment)

These fixes ensure proper workload type detection and improve error visibility for lease-shell operations, resolving issues with shell access to deployments using RAM volumes.
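A minimal sketch of the corrected detection logic, using a simplified `Storage` type in place of the SDL's real attribute lookup (`storage.Attributes.Find(sdl.StorageAttributePersistent).AsBool()`):

```go
package main

import "fmt"

// Storage is a simplified stand-in for the SDL storage type used here only
// for illustration; the real code inspects storage attributes.
type Storage struct {
	Name       string
	Persistent bool
}

// usesStatefulSet mirrors the corrected check: a workload is a StatefulSet
// only if some storage volume is persistent -- not merely mounted. RAM/SHM
// volumes are mounted but not persistent, so they map to a Deployment.
func usesStatefulSet(storages []Storage) bool {
	for _, s := range storages {
		if s.Persistent {
			return true
		}
	}
	return false
}

func main() {
	ram := []Storage{{Name: "shm", Persistent: false}}  // RAM/SHM volume
	disk := []Storage{{Name: "data", Persistent: true}} // persistent volume
	fmt.Println(usesStatefulSet(ram))  // Deployment
	fmt.Println(usesStatefulSet(disk)) // StatefulSet
}
```

Under the old mount-path check, the `ram` case above would have been classified as a StatefulSet, producing the "statefulsets.apps not found" error during lease-shell.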
This PR adds further documentation to the bid engine codebase and includes consistency improvements in error logging, enabling better instrumentation of errors in logging solutions.
Summary by CodeRabbit
- Standardized logging keys from `"err"` to `"error"` across several components for consistency.