refactor: restructure codebase into modular packages (#37)
Conversation
- Extract configuration to config/ package
- Extract API handlers to api/ package
- Extract tag management to tag/ package
- Extract LLRP connection handling to connection/ package
- Extract server mode to server/ package
- Move main.go to cmd/golemu/main.go for standard Go layout
- Extract common LLRP header reading logic
- Remove unnecessary runtime.Gosched() and time.Sleep() calls
- Improve error handling throughout
- Fix deprecated gin API usage (BindWith -> ShouldBindJSON)
- Update HTTP status codes to standard values
- Remove unused code and variables
- Update README with correct installation instructions
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: Breaks the previous monolith into modular packages (config, api, connection, server, tag), adds a new cmd entrypoint, implements the LLRP client/handler/simulator, exposes a Gin HTTP API for tag management, adds many unit tests and CI workflows, updates the README and LICENSE year, and removes the old main.go.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant CLI
    participant Config
    participant Server
    participant API
    participant TagMgr
    participant LLRP
    CLI->>Config: parse flags/commands (GetConfig)
    Config-->>CLI: Config
    CLI->>Server: NewServer(config)
    Server->>TagMgr: NewManagerService(tagManagerChan, tagUpdatedChan, isConnAlive)
    Server->>Server: loadTags() from file
    Server->>API: NewServer(apiPort, tagManagerChan)
    par Start services
        Server->>API: Start() (goroutine)
        Server->>Server: runTagManager(signals) (goroutine)
    end
    Server->>LLRP: listen & accept connections
    LLRP->>Server: connection established → send READER_EVENT_NOTIFICATION
    Note over API,TagMgr: API tag operations
    API->>TagMgr: POST/DELETE /tags → Manager cmd over channel
    TagMgr->>TagMgr: Process(Add/Delete/Retrieve)
    TagMgr-->>LLRP: tagUpdatedChan (if connection alive)
    Note over LLRP: Reporting loop
    LLRP->>LLRP: rebuild reports on update
    LLRP->>LLRP: emit RO_ACCESS_REPORT / KEEP_ALIVE (ticker)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Files/areas to focus on:
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 9
🧹 Nitpick comments (5)
connection/client.go (1)
57-84: Consider checking write errors for completeness. Lines 63 and 68 write to the connection without checking for errors. While acceptable for test-only code, adding error checks would improve robustness.
Optional improvement:
```diff
 case llrp.ReaderEventNotificationHeader:
     log.WithFields(log.Fields{
         "Message ID": messageID,
     }).Info(">>> READER_EVENT_NOTIFICATION")
-    conn.Write(llrp.SetReaderConfig(messageID + 1))
+    if _, err := conn.Write(llrp.SetReaderConfig(messageID + 1)); err != nil {
+        log.Warnf("error writing SET_READER_CONFIG: %v", err)
+    }
 case llrp.KeepaliveHeader:
     log.WithFields(log.Fields{
         "Message ID": messageID,
     }).Info(">>> KEEP_ALIVE")
-    conn.Write(llrp.KeepaliveAck(messageID + 1))
+    if _, err := conn.Write(llrp.KeepaliveAck(messageID + 1)); err != nil {
+        log.Warnf("error writing KEEP_ALIVE_ACK: %v", err)
+    }
```
connection/handler.go (2)
45-55: Redundant connection close calls. Lines 49 and 53 explicitly call `conn.Close()` before returning, but line 42 already has `defer conn.Close()`. While calling Close multiple times is safe (idempotent), the explicit calls are redundant and can be removed for clarity. Apply this diff:
```diff
 for {
     hdr, _, err := ReadLLRPMessage(conn)
     if err == io.EOF {
         log.Info("the client is disconnected, closing LLRP connection")
-        conn.Close()
         return
     } else if err != nil {
         log.Infof("closing LLRP connection due to %s", err.Error())
-        conn.Close()
         return
     }
```
79-84: Simplify keepalive ticker initialization. Lines 81-84 create an empty ticker then conditionally replace it. This is unnecessarily complex and the zero-value ticker is unused.
Apply this diff:
```diff
 func (h *Handler) startReportLoop(conn net.Conn, trds llrp.TagReportDataStack) {
     roarTicker := time.NewTicker(time.Duration(h.reportInterval) * time.Millisecond)
-    keepaliveTicker := &time.Ticker{}
+    var keepaliveTicker *time.Ticker
     if h.keepaliveInterval != 0 {
         keepaliveTicker = time.NewTicker(time.Duration(h.keepaliveInterval) * time.Second)
+    } else {
+        // Create a stopped ticker that will never fire
+        keepaliveTicker = time.NewTicker(time.Hour)
+        keepaliveTicker.Stop()
     }
```
tag/manager.go (1)
34-65: Consider consolidating tag update notifications. The current implementation sends a notification on `tagUpdatedChan` for every individual tag added or deleted (lines 46, 56). When processing a batch of tags, this generates excessive channel traffic. For example, adding 100 tags would trigger 100 separate notifications. Apply this diff to send a single notification after processing all tags in the command:
```diff
 func (s *ManagerService) Process(cmd Manager) {
     s.mu.Lock()
     defer s.mu.Unlock()

     res := []*llrp.Tag{}
+    hasChanges := false
     switch cmd.Action {
     case AddTags:
         for _, t := range cmd.Tags {
             if i := s.tags.GetIndexOf(t); i < 0 {
                 s.tags = append(s.tags, t)
                 res = append(res, t)
-                if s.isConnAlive.Load() {
-                    s.tagUpdatedChan <- s.tags
-                }
+                hasChanges = true
             }
         }
     case DeleteTags:
         for _, t := range cmd.Tags {
             if i := s.tags.GetIndexOf(t); i >= 0 {
                 s.tags = append(s.tags[:i], s.tags[i+1:]...)
                 res = append(res, t)
-                if s.isConnAlive.Load() {
-                    s.tagUpdatedChan <- s.tags
-                }
+                hasChanges = true
             }
         }
     case RetrieveTags:
         res = s.tags
     }
+
+    if hasChanges && s.isConnAlive.Load() {
+        s.tagUpdatedChan <- s.tags
+    }
+
     cmd.Tags = res
     s.tagManagerChan <- cmd
 }
```
server/server.go (1)
90-106: Consider adding graceful shutdown for active LLRP connections. The server spawns a goroutine for each connection (line 105) without tracking active connections. When the server receives SIGTERM, `runTagManager` calls `log.Fatalf` (line 135), which immediately terminates the program without closing active LLRP connections gracefully. While this may be acceptable for your use case, LLRP clients might benefit from receiving proper connection close notifications. Consider adding connection tracking and graceful shutdown:
```diff
+    var wg sync.WaitGroup
+    ctx, cancel := context.WithCancel(context.Background())
+    defer cancel()
+
     // Handle LLRP connections
     log.Info("starting LLRP connection...")
     for {
+        // Check for shutdown signal
+        select {
+        case <-ctx.Done():
+            log.Info("Shutting down, waiting for connections to close...")
+            wg.Wait()
+            return 0
+        default:
+        }
+
         conn, err := l.Accept()
         if err != nil {
             log.Error(err)
             continue
         }
         log.Info("LLRP connection initiated")
         if err := s.llrpHandler.SendReaderEventNotification(conn); err != nil {
             log.Errorf("error sending READER_EVENT_NOTIFICATION: %v", err)
             conn.Close()
             continue
         }
-        go s.llrpHandler.HandleRequest(conn, s.tagService.GetTags())
+        wg.Add(1)
+        go func(c net.Conn) {
+            defer wg.Done()
+            defer c.Close()
+            s.llrpHandler.HandleRequest(c, s.tagService.GetTags())
+        }(conn)
     }
```

And update `runTagManager` to signal shutdown instead of fatal exit:

```diff
 func (s *Server) runTagManager(signals chan os.Signal) {
     for {
         select {
         case cmd := <-s.tagManagerChan:
             s.tagService.Process(cmd)
         case sig := <-signals:
-            log.Fatalf("%v", sig)
+            log.Infof("Received signal %v, initiating shutdown", sig)
+            return
         }
     }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- LICENSE (1 hunks)
- README.md (2 hunks)
- api/handlers.go (1 hunks)
- api/server.go (1 hunks)
- cmd/golemu/main.go (1 hunks)
- config/config.go (1 hunks)
- connection/client.go (1 hunks)
- connection/handler.go (1 hunks)
- connection/llrp_common.go (1 hunks)
- connection/simulator.go (1 hunks)
- main.go (0 hunks)
- server/server.go (1 hunks)
- tag/manager.go (1 hunks)
- tag/types.go (1 hunks)
💤 Files with no reviewable changes (1)
- main.go
🧰 Additional context used
🧬 Code graph analysis (9)
api/server.go (3)
- api/handlers.go (2): Handler (18-20), NewHandler (23-27)
- server/server.go (2): NewServer (41-63), Server (24-38)
- tag/types.go (1): Manager (22-25)

server/server.go (5)
- tag/types.go (1): Manager (22-25)
- tag/manager.go (2): ManagerService (15-21), NewManagerService (24-31)
- api/handlers.go (2): Handler (18-20), NewHandler (23-27)
- connection/handler.go (2): Handler (18-25), NewHandler (28-38)
- api/server.go (2): NewServer (21-26), Server (15-18)

cmd/golemu/main.go (4)
- config/config.go (16): App (18-18), Version (15-15), GetConfig (59-73), Debug (21-21), Client (31-31), IP (24-24), Port (26-26), Server (34-34), APIPort (35-35), PDU (27-27), ReportInterval (28-28), KeepaliveInterval (25-25), InitialMessageID (22-22), File (36-36), Simulator (39-39), SimulationDir (40-40)
- connection/client.go (2): Client (18-21), NewClient (24-29)
- server/server.go (2): Server (24-38), NewServer (41-63)
- connection/simulator.go (2): Simulator (25-32), NewSimulator (35-45)

connection/client.go (2)
- config/config.go (1): Client (31-31)
- connection/llrp_common.go (1): ReadLLRPMessage (49-64)

connection/handler.go (1)
- connection/llrp_common.go (1): ReadLLRPMessage (49-64)

connection/simulator.go (2)
- config/config.go (1): Simulator (39-39)
- connection/llrp_common.go (1): ReadLLRPMessage (49-64)

tag/manager.go (1)
- tag/types.go (4): Manager (22-25), AddTags (16-16), DeleteTags (18-18), RetrieveTags (14-14)

api/handlers.go (1)
- tag/types.go (4): Manager (22-25), AddTags (16-16), DeleteTags (18-18), RetrieveTags (14-14)

config/config.go (3)
- api/server.go (1): Server (15-18)
- server/server.go (1): Server (24-38)
- connection/simulator.go (1): Simulator (25-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cursor Bugbot
🔇 Additional comments (20)
LICENSE (1)
2-2: LGTM! Copyright year update is appropriate.
README.md (2)
23-52: LGTM! Installation instructions are clear, well-organized, and correctly reference the new `cmd/golemu` structure. The PATH note is a helpful addition for users.
124-124: LGTM! Author attribution update to GitHub handle format is appropriate.
tag/types.go (1)
9-25: LGTM! Clean and idiomatic type definitions for tag management. The use of iota for `ManagementAction` constants and the simple `Manager` struct provide a clear API for tag operations.
connection/llrp_common.go (2)
13-23: LGTM! The constant `LLRPHeaderSize` correctly represents the 10-byte LLRP header structure (2 + 4 + 4), and the `LLRPHeader` struct fields are appropriately typed.
25-46: LGTM! `ReadLLRPHeader` correctly uses `io.ReadFull` to ensure exact byte counts are read, and properly converts big-endian network bytes to native types. Error handling is appropriate.
connection/client.go (2)
17-29: LGTM! Client struct and constructor follow standard Go patterns.
31-55: LGTM! The client connection logic is appropriate for a test-only client. The infinite retry with fixed delay is acceptable for this use case, and error handling correctly distinguishes between EOF (clean disconnect) and other errors.
connection/simulator.go (3)
24-45: LGTM! Simulator struct and constructor follow standard patterns with appropriate use of atomic operations for message ID management.
93-110: LGTM with consideration for error handling. The message handling loop is well-structured. The use of `log.Fatalf` for errors aligns with the simulator's fail-fast philosophy, though it prevents graceful cleanup.
133-145: LGTM! Event cycle wrapping logic is correct, and error handling is consistent with proper error propagation.
connection/handler.go (3)
17-38: LGTM! Handler struct design is sound with appropriate use of atomic types for concurrent access to shared state.
86-142: LGTM with minor observation. The report loop logic is sound. The pattern of setting `isConnAlive` to true at the start of each iteration (line 104) and checking it at the end (line 133) ensures proper error detection and graceful goroutine termination.
144-158: LGTM! Both methods are straightforward and correctly implemented. The atomic operations for connection liveness tracking are appropriate.
config/config.go (2)
13-41: LGTM! The CLI configuration using kingpin follows a standard pattern with reasonable defaults. The IP default of "0.0.0.0" (line 24) appropriately binds to all interfaces for server operation.
43-73: LGTM! The `Config` struct and `GetConfig` function follow standard kingpin patterns for configuration management. All fields are properly mapped and dereferenced.
api/server.go (1)
21-38: LGTM! The API server setup is clean and follows good practices. The constructor properly wires the handler with the tag management channel, and the Start method correctly sets up routes and delegates error handling to the caller.
server/server.go (2)
79-85: Verify handling of API server startup failures. If the API server fails to start (e.g., port already in use), the error is only logged and the LLRP server continues running. This means the server could be running without API functionality available. Confirm whether this behavior is intentional or if the server should exit when the API fails to start.
If the API is required for operation, consider exiting on startup failure:

```diff
 // Start API server
 apiServer := api.NewServer(s.apiPort, s.tagManagerChan)
 go func() {
     if err := apiServer.Start(); err != nil {
-        log.Errorf("API server error: %v", err)
+        log.Fatalf("API server failed to start: %v", err)
     }
 }()
```

Or add a startup check:

```go
// Start API server with startup confirmation
apiReady := make(chan error, 1)
go func() {
    // Signal readiness or error
    apiReady <- apiServer.Start()
}()

// Wait briefly for startup or error
select {
case err := <-apiReady:
    if err != nil {
        log.Fatalf("API server failed to start: %v", err)
    }
case <-time.After(100 * time.Millisecond):
    log.Info("API server starting...")
}
```
109-127: LGTM! The tag loading implementation is well-structured with proper error handling, file existence checks, and informative logging. The use of structured logging fields makes debugging easier.
api/handlers.go (1)
114-128: The review comment is incorrect: the synchronous channel pattern is safe. This code implements a correct bidirectional request-response pattern on an unbuffered channel. The `Process()` function in `tag/manager.go` receives the `Manager` struct by value, populates its `Tags` field with results (line 63), and sends it back on the channel (line 64). The handler's send (line 119) blocks until the manager receives, processes, and responds, so no deadlock occurs. The pattern is intentional and sound for synchronous RPC-style communication within the concurrent architecture.
Likely an incorrect or invalid review comment.
…k, handling llrp headers carefully
Actionable comments posted: 5
♻️ Duplicate comments (1)
api/handlers.go (1)
92-116: Stop deleting tags when returning an error. Lines 94-110 keep sending delete commands even after `hasError` flips true. The handler then returns "error", so the HTTP layer answers 404 "not found" while some tags were actually removed. Validate the batch first (like `reqAddTag`), and only emit delete commands when every record parses successfully.

```diff
-func (h *Handler) reqDeleteTag(req []llrp.TagRecord) string {
-    hasError := false
-    for _, t := range req {
-        tagObj, err := llrp.NewTag(&llrp.TagRecord{
-            PCBits: t.PCBits,
-            EPC:    t.EPC,
-        })
-        if err != nil {
-            log.Errorf("error creating tag: %v", err)
-            hasError = true
-            continue
-        }
-
-        deleteCmd := tag.Manager{
-            Action: tag.DeleteTags,
-            Tags:   []*llrp.Tag{tagObj},
-        }
-        h.tagManagerChan <- deleteCmd
-    }
-
-    if hasError {
-        return "error"
-    }
+func (h *Handler) reqDeleteTag(req []llrp.TagRecord) string {
+    validTags := make([]*llrp.Tag, 0, len(req))
+    for _, t := range req {
+        tagObj, err := llrp.NewTag(&llrp.TagRecord{
+            PCBits: t.PCBits,
+            EPC:    t.EPC,
+        })
+        if err != nil {
+            log.Errorf("error creating tag: %v", err)
+            return "error"
+        }
+        validTags = append(validTags, tagObj)
+    }
+
+    for _, tagObj := range validTags {
+        deleteCmd := tag.Manager{
+            Action: tag.DeleteTags,
+            Tags:   []*llrp.Tag{tagObj},
+        }
+        h.tagManagerChan <- deleteCmd
+    }
+
     log.Debugf("delete %v", req)
     return "delete"
 }
```
🧹 Nitpick comments (2)
config/config_test.go (2)
11-24: Basic smoke test – consider validating default values. This test verifies that `GetConfig()` doesn't panic and returns a non-nil struct. While this serves as a minimal smoke test, it doesn't validate actual behavior. Consider enhancing it to parse command-line arguments or verify that default values (e.g., IP "0.0.0.0", Port 5084, PDU 1500) are correctly reflected in the Config struct.
26-45: Consolidate with TestGetConfig to eliminate duplication. This test duplicates the logic of `TestGetConfig`: both call `GetConfig()`, check for nil, and access fields without validating values. The only difference is that this test accesses all fields while `TestGetConfig` accesses three. Consider merging these into a single test that accesses all fields, reducing code duplication and maintenance overhead.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- .github/workflows/docker.yml (1 hunks)
- .gitignore (1 hunks)
- api/handlers.go (1 hunks)
- api/handlers_test.go (1 hunks)
- cmd/golemu/main.go (1 hunks)
- config/config_test.go (1 hunks)
- connection/handler.go (1 hunks)
- connection/llrp_common.go (1 hunks)
- connection/llrp_common_test.go (1 hunks)
- connection/simulator.go (1 hunks)
- server/server.go (1 hunks)
- tag/manager_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- cmd/golemu/main.go
- server/server.go
🧰 Additional context used
🧬 Code graph analysis (7)
connection/llrp_common_test.go (1)
- connection/llrp_common.go (3): LLRPHeaderSize (16-16), ReadLLRPHeader (27-47), ReadLLRPMessage (50-71)

config/config_test.go (1)
- config/config.go (13): GetConfig (59-73), IP (24-24), Port (26-26), PDU (27-27), Debug (21-21), InitialMessageID (22-22), InitialKeepaliveID (23-23), KeepaliveInterval (25-25), ReportInterval (28-28), APIPort (35-35), File (36-36), SimulationDir (40-40), Version (15-15)

connection/simulator.go (2)
- config/config.go (1): Simulator (39-39)
- connection/llrp_common.go (1): ReadLLRPMessage (50-71)

api/handlers_test.go (2)
- tag/types.go (4): Manager (22-25), RetrieveTags (14-14), AddTags (16-16), DeleteTags (18-18)
- api/handlers.go (1): NewHandler (23-27)

api/handlers.go (2)
- tag/types.go (4): Manager (22-25), AddTags (16-16), DeleteTags (18-18), RetrieveTags (14-14)
- connection/handler.go (2): NewHandler (29-40), Handler (18-26)

tag/manager_test.go (2)
- tag/types.go (4): Manager (22-25), AddTags (16-16), DeleteTags (18-18), RetrieveTags (14-14)
- tag/manager.go (1): NewManagerService (24-31)

connection/handler.go (1)
- connection/llrp_common.go (1): ReadLLRPMessage (50-71)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cursor Bugbot
🔇 Additional comments (2)
.gitignore (1)
10-11: Good additions for Go best practices. Adding `coverage.out` and `golemu` to `.gitignore` aligns with standard Go project conventions: test coverage artifacts and compiled binaries should not be tracked in version control.
config/config_test.go (1)
47-51: LGTM! This is a simple and effective smoke test that ensures the `Version` constant is populated.
…the previous one exits
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##            main      #37      +/-  ##
==========================================
+ Coverage   0.00%   59.77%   +59.77%
==========================================
  Files          1       10        +9
  Lines        456      619      +163
==========================================
+ Hits           0      370      +370
+ Misses       456      233      -223
- Partials       0       16       +16
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 2
🧹 Nitpick comments (2)
connection/simulator.go (2)
123-123: Unreachable code. The infinite loop starting at line 102 never exits normally, making this return statement unreachable. Consider removing it or documenting why it exists.
161-188: Consider adding cancellation for cleaner shutdown. The simulation goroutine lacks an explicit cancellation mechanism. It exits only when a send error occurs (lines 179-182), meaning it continues running until the connection is closed. While this works because `Run()` defers `conn.Close()`, a cleaner approach would pass a context or done channel to allow explicit cancellation when the main loop exits or receives a signal.
Example with context:

```go
func (s *Simulator) startSimulationLoop(ctx context.Context, conn net.Conn, simulationFiles []string, eventCycle *int, trds llrp.TagReportDataStack, roarTicker *time.Ticker) {
    go func() {
        defer s.loopStarted.Store(false)
        for {
            select {
            case <-ctx.Done():
                log.Info("simulation loop cancelled")
                return
            case <-roarTicker.C:
                tags, err := s.loadTagsForNextEventCycle(simulationFiles, eventCycle)
                // ... rest of logic
            }
        }
    }()
}
```

Then in `Run()`, create a context that gets cancelled on signal or before returning.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- .github/workflows/codecov.yml (1 hunks)
- api/handlers_test.go (1 hunks)
- connection/llrp_common.go (1 hunks)
- connection/simulator.go (1 hunks)
- tag/manager.go (1 hunks)
- tag/manager_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- connection/llrp_common.go
- api/handlers_test.go
🧰 Additional context used
🧬 Code graph analysis (3)
tag/manager_test.go (2)
- tag/types.go (4): Manager (22-25), AddTags (16-16), DeleteTags (18-18), RetrieveTags (14-14)
- tag/manager.go (1): NewManagerService (24-31)

tag/manager.go (1)
- tag/types.go (4): Manager (22-25), AddTags (16-16), DeleteTags (18-18), RetrieveTags (14-14)

connection/simulator.go (2)
- config/config.go (1): Simulator (39-39)
- connection/llrp_common.go (1): ReadLLRPMessage (50-74)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cursor Bugbot
🔇 Additional comments (4)
.github/workflows/codecov.yml (1)
21-21: LGTM! Expanding test coverage to all packages (`./...`) correctly aligns with the new modular structure introduced in this PR.
tag/manager_test.go (1)
61-68: Error handling implemented correctly. All `llrp.NewTag` calls throughout this test file now properly capture and handle errors, addressing the previous review concern. The pattern `if err != nil { t.Fatalf(...) }` ensures test failures are explicit when tag construction fails.
tag/manager.go (1)
34-78: Deadlock fix confirmed. The channel sends (lines 74-77) now occur after releasing the mutex (line 71), resolving the previous deadlock concern. The tag copies are prepared under lock (lines 50-51, 63-64) and sent safely afterwards.
connection/simulator.go (1)
65-78: Previous review issues resolved. The signal handling, error returns, and atomic guard for duplicate loops have all been properly implemented, addressing earlier concerns.
- Enhanced docstrings across all packages following Go conventions
- Added detailed parameter documentation for all exported functions
- Improved type documentation with usage examples and context
- Added comprehensive test coverage for connection, server, and simulator packages
- Fixed deadlock issues in tests with proper timeouts and goroutine management
- Tests now cover handler, client, simulator, and server functionality
Actionable comments posted: 5
🧹 Nitpick comments (2)
connection/handler.go (1)
94-97: Consider continuing instead of returning on unknown headers. Returning immediately on an unknown header (line 96) closes the connection and prevents processing any subsequent messages. If the client sends an unsupported message followed by valid ones, the handler will exit prematurely. Consider logging a warning and continuing the loop unless this is a deliberate protocol violation:

```diff
 default:
     log.Warnf("unknown header: %v", hdr.Header)
-    return
+    // Continue processing; don't close connection for unrecognized messages
 }
```

Or, if closing is intentional for protocol safety, add a comment explaining why.
connection/simulator.go (1)
connection/simulator.go (1)
180-207: Consider adding graceful cancellation mechanism. The simulation loop goroutine only exits on send errors (lines 198-200). If the connection closes or a signal arrives, this goroutine continues consuming ticker events until the next send attempt fails. While this eventually terminates, it's not immediate.
For cleaner shutdown, consider accepting a context or monitoring the done channel pattern:

```diff
-func (s *Simulator) startSimulationLoop(conn net.Conn, simulationFiles []string, eventCycle *int, trds llrp.TagReportDataStack, roarTicker *time.Ticker) chan struct{} {
+func (s *Simulator) startSimulationLoop(ctx context.Context, conn net.Conn, simulationFiles []string, eventCycle *int, trds llrp.TagReportDataStack, roarTicker *time.Ticker) chan struct{} {
     done := make(chan struct{})
     go func() {
         defer s.loopStarted.Store(false)
         defer close(done)
         for {
             select {
+            case <-ctx.Done():
+                log.Info("simulation loop cancelled")
+                return
             case <-roarTicker.C:
                 tags, err := s.loadTagsForNextEventCycle(simulationFiles, eventCycle)
                 ...
             }
         }
     }()
     return done
 }
```

This is optional since the current implementation will eventually stop, but explicit cancellation improves responsiveness.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- api/handlers.go (1 hunks)
- api/server.go (1 hunks)
- config/config.go (1 hunks)
- connection/client.go (1 hunks)
- connection/client_test.go (1 hunks)
- connection/handler.go (1 hunks)
- connection/handler_test.go (1 hunks)
- connection/llrp_common.go (1 hunks)
- connection/simulator.go (1 hunks)
- connection/simulator_test.go (1 hunks)
- server/server.go (1 hunks)
- server/server_test.go (1 hunks)
- tag/manager.go (1 hunks)
- tag/types.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- connection/simulator_test.go
🚧 Files skipped from review as they are similar to previous changes (3)
- connection/client.go
- api/server.go
- api/handlers.go
🧰 Additional context used
🧬 Code graph analysis (8)
connection/client_test.go (2)
- connection/client.go (1): NewClient (30-35)
- connection/llrp_common.go (1): ReadLLRPMessage (60-84)

connection/handler.go (1)
- connection/llrp_common.go (1): ReadLLRPMessage (60-84)

tag/manager.go (1)
- tag/types.go (4): Manager (23-26), AddTags (16-16), DeleteTags (18-18), RetrieveTags (14-14)

config/config.go (3)
- api/server.go (1): Server (16-19)
- server/server.go (1): Server (26-40)
- connection/simulator.go (1): Simulator (29-37)

server/server_test.go (2)
- server/server.go (1): NewServer (53-75)
- tag/types.go (4): Manager (23-26), AddTags (16-16), RetrieveTags (14-14), DeleteTags (18-18)

connection/handler_test.go (1)
- connection/handler.go (1): NewHandler (39-50)

connection/simulator.go (2)
- config/config.go (1): Simulator (39-39)
- connection/llrp_common.go (1): ReadLLRPMessage (60-84)

server/server.go (5)
- tag/types.go (1): Manager (23-26)
- tag/manager.go (2): ManagerService (18-24), NewManagerService (32-39)
- api/handlers.go (2): Handler (19-21), NewHandler (27-31)
- connection/handler.go (2): Handler (20-28), NewHandler (39-50)
- api/server.go (2): NewServer (26-31), Server (16-19)
🪛 GitHub Actions: Codecov
connection/handler.go
[warning] 108-108: DATA RACE detected involving startReportLoop and concurrent writes to mock connection during tests.
connection/handler_test.go
[error] 128-128: DATA RACE detected in TestHandler_HandleRequest_SetReaderConfig (race during execution).
[error] 230-236: DATA RACE detected in TestHandler_startReportLoop (race during execution).
[error] 287-287: DATA RACE detected in TestHandler_startReportLoop_WithKeepalive (race during execution).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cursor Bugbot
🔇 Additional comments (6)
connection/handler.go (2)
17-50: LGTM: Goroutine guard properly implemented. The `reportLoopStarted` atomic bool and its initialization correctly address the past review concern about preventing multiple concurrent report loops. The constructor properly initializes all atomic guards.
167-186: LGTM! Both `SendReaderEventNotification` and `IsConnAlive` are implemented correctly with proper atomic operations and error handling.
connection/simulator.go (4)
26-59: LGTM: Atomic guard for simulation loop properly added. The `loopStarted` field and its initialization correctly implement the guard mechanism suggested in past reviews to prevent duplicate simulation loops.
80-87: LGTM: Graceful signal handling implemented. The signal handler now properly closes the listener and stops signal delivery instead of calling `os.Exit(0)`, allowing deferred cleanup to execute. This correctly addresses the past review concern.
89-96: LGTM: Graceful shutdown on listener close handled correctly. The code properly checks for `net.ErrClosed` and returns 0, treating listener closure as a normal shutdown condition. This correctly implements the past review suggestion.
145-164: LGTM: Error handling improved. The function now returns an error (line 161) instead of calling `log.Fatalf`, making it testable and allowing callers to handle the error gracefully. This correctly addresses the past review concern.
- Fix simulator skipping cycle 0 by removing premature eventCycle increment
- Fix data race in handler_test.go by using WaitGroup to synchronize goroutines
- Fix data race in handler.go by moving isConnAlive.Store(true) before loop
- Fix data race in simulator.go by closing connection when simulation loop terminates
- Fix data race in tag/manager.go by returning deep copies instead of exposing internal slice
- Make mockConn thread-safe for concurrent writes in tests
- Fix timestamp calculation in simulator.go to use UnixNano for epoch microseconds
- Refactor handler_test.go to use synchronization primitives instead of time.Sleep
- Fix syntax errors in client.go: add context import, complete if statement, add connection handling
- Fix server tests to avoid log.Fatalf termination by removing signal sending
- Add waitForCondition helper function for better test synchronization
Actionable comments posted: 0
♻️ Duplicate comments (1)
connection/handler_test.go (1)
304-315: Shared buffer is still accessed without synchronization. `startReportLoop` keeps writing to the underlying `bytes.Buffer` while this test calls `conn.Len()`. `mockConn.Len()` just forwards to `bytes.Buffer.Len`, so the same race the previous review flagged will still be reported by `go test -race`. Please capture the writes in a thread-safe way (e.g., reuse the signaling wrapper with a mutex-protected counter or another synchronized sink) before asserting on the data.
🧹 Nitpick comments (2)
api/handlers.go (2)
129-180: Document or buffer the synchronous request-response pattern. The function sends commands to `tagManagerChan` and immediately reads responses from the same channel (lines 163-165). This synchronous pattern requires:
- The channel to be buffered, OR
- A separate goroutine actively processing commands from `tagManagerChan`

Without either, this will deadlock. While the tests demonstrate the goroutine pattern, the code doesn't document this requirement. Consider adding a comment in the `Handler` struct or the `NewHandler` constructor documenting that callers must service `tagManagerChan` asynchronously.
Example documentation:

```diff
 // Handler processes HTTP API requests for tag management operations.
 // It provides REST endpoints for adding, deleting, and retrieving tags.
+//
+// Important: The tagManagerChan must be serviced by a separate goroutine
+// that processes Manager commands and sends responses back on the same channel.
+// Failure to do so will cause request handlers to deadlock.
 type Handler struct {
     tagManagerChan chan tag.Manager
 }
```
154-169: Consider batching tags for better performance.

The current implementation sends tags one at a time and waits for each response, making N synchronous round-trips for N tags. While this enables per-tag duplicate detection, it's slower than batching. If it's acceptable to return "some tags added, some duplicates" rather than rejecting the entire request when any duplicate exists, you could send all tags in a single `Manager` command.

Trade-offs:
- Current approach: precise per-tag duplicate reporting, slower
- Batched approach: faster, but duplicate detection is all-or-nothing at Manager level
Given the current API semantics (return error if any tag is duplicate), the current approach is correct. Only refactor if API semantics change to support partial success.
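If the semantics ever do shift to partial success, a batched variant could carry all tags in one command and report per-tag outcomes, along these lines (illustrative types, not the PR's actual `Manager`):

```go
package main

import "fmt"

// batchResult reports, per tag, whether it was added or was a duplicate.
type batchResult struct {
	Added      []string
	Duplicates []string
}

// addTagsBatch inserts all new tags in one pass over the store,
// trading per-request round-trips for a single bulk operation.
func addTagsBatch(store map[string]bool, tags []string) batchResult {
	var res batchResult
	for _, t := range tags {
		if store[t] {
			res.Duplicates = append(res.Duplicates, t)
			continue
		}
		store[t] = true
		res.Added = append(res.Added, t)
	}
	return res
}

func main() {
	store := map[string]bool{"a": true}
	res := addTagsBatch(store, []string{"a", "b", "c"})
	fmt.Println(res.Added, res.Duplicates) // [b c] [a]
}
```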
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- `api/handlers.go` (1 hunks)
- `api/handlers_test.go` (1 hunks)
- `connection/client.go` (1 hunks)
- `connection/handler.go` (1 hunks)
- `connection/handler_test.go` (1 hunks)
- `connection/llrp_common_test.go` (1 hunks)
- `connection/simulator.go` (1 hunks)
- `server/server_test.go` (1 hunks)
- `tag/manager.go` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- `server/server_test.go`
- `connection/handler.go`
🧰 Additional context used
🧬 Code graph analysis (7)
tag/manager.go (1)
tag/types.go (4)
`Manager` (23-26), `AddTags` (16-16), `DeleteTags` (18-18), `RetrieveTags` (14-14)
api/handlers.go (1)
tag/types.go (4)
`Manager` (23-26), `AddTags` (16-16), `DeleteTags` (18-18), `RetrieveTags` (14-14)
api/handlers_test.go (2)
tag/types.go (4)
`Manager` (23-26), `AddTags` (16-16), `DeleteTags` (18-18), `RetrieveTags` (14-14)

api/handlers.go (1)

`NewHandler` (57-61)
connection/client.go (2)
config/config.go (1)
`Client` (31-31)

connection/llrp_common.go (1)

`ReadLLRPMessage` (60-84)
connection/simulator.go (1)
connection/llrp_common.go (2)
`LLRPHeader` (22-26), `ReadLLRPMessage` (60-84)
connection/handler_test.go (1)
connection/handler.go (1)
`NewHandler` (39-50)
connection/llrp_common_test.go (1)
connection/llrp_common.go (3)
`LLRPHeaderSize` (17-17), `ReadLLRPHeader` (32-52), `ReadLLRPMessage` (60-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cursor Bugbot
🔇 Additional comments (9)
connection/simulator.go (1)
68-155: LGTM - Signal handling and lifecycle management properly implemented.

The refactored code successfully addresses the previous concerns:
- Signal handler now closes the listener instead of calling `os.Exit` (lines 86-87)
- `Accept` error handling checks for `net.ErrClosed` to distinguish graceful shutdown (lines 93-95)
- The main loop uses `select` to coordinate message reading with the simulation lifecycle (lines 126-153)
- The nil `simulationDone` channel pattern (line 112) is correct: it will be ignored by `select` until assigned

connection/llrp_common_test.go (2)
17-142: LGTM - Comprehensive test coverage for LLRP parsing.

The tests properly exercise:
- Header size constant validation
- Valid and invalid header parsing with appropriate error expectations
- Message reading with and without body data
- Error conditions (incomplete data, EOF, UnexpectedEOF)
The test structure is clean and follows Go testing best practices.
144-206: Well-designed mock with proper thread safety.

The `mockConn` implementation correctly provides thread-safe access to the writer via a mutex (lines 160-161), and the `Len()` helper (lines 170-177) maintains this safety. This is essential for tests that spawn goroutines.

api/handlers_test.go (2)
25-89: Excellent test structure with proper error handling.

The test correctly:

- Validates `llrp.NewTag` errors (lines 42-44)
- Uses a `ready` channel to synchronize goroutine startup before making the HTTP request (lines 48-60)
- Simulates the tag manager's request-response pattern
This synchronization approach prevents race conditions in tests.
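The ready-channel handshake described above is a standard Go pattern; stripped to its essentials (illustrative names):

```go
package main

import "fmt"

func main() {
	ready := make(chan struct{})
	result := make(chan string)

	go func() {
		close(ready) // signal: the goroutine is now servicing requests
		result <- "response"
	}()

	<-ready // block until the worker is definitely running
	fmt.Println(<-result) // response
}
```

Closing `ready` (rather than sending on it) lets any number of waiters proceed, which scales to tests with several observers.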
251-296: Good defensive programming in test setup.

Creating tags before spawning the goroutine (lines 256-259) ensures that any tag creation errors fail fast with clear diagnostics, rather than causing confusing failures inside the goroutine.
connection/client.go (2)
47-92: Robust connection retry logic with proper context handling.

The implementation correctly handles:
- Context cancellation during connection attempts (lines 59-62, 70-72)
- Exponential backoff with a reasonable cap (1s → 30s max, lines 52-69)
- EOF as graceful disconnect (lines 82-84)
- Connection lifecycle logging
94-131: Correct messageID overflow handling.

Line 97 wraps to 1 instead of 0 when `messageID` overflows, which is appropriate for LLRP message ID semantics. The message handling covers the key LLRP headers and logs unknown types appropriately.
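The wrap-to-1 behavior reduces to a few lines (a sketch; the function name is illustrative):

```go
package main

import "fmt"

// nextMessageID increments an LLRP message ID, wrapping to 1 (not 0)
// on uint32 overflow, since 0 is typically avoided as a message ID.
func nextMessageID(id uint32) uint32 {
	if id == ^uint32(0) { // max uint32
		return 1
	}
	return id + 1
}

func main() {
	fmt.Println(nextMessageID(41))         // 42
	fmt.Println(nextMessageID(^uint32(0))) // 1
}
```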
tag/manager.go (2)
47-101: Excellent fixes for thread safety and deadlock prevention.

The refactored code successfully addresses all previous concerns:
Deadlock prevention (lines 94, 97-100): Mutex is released before any channel sends, eliminating the potential for deadlock when receivers aren't ready.
Safe deletion (lines 68-81): Instead of modifying the slice during iteration, the code builds a new `toKeep` slice, avoiding index shifting bugs.

Copy before channel send (lines 63-64, 84-85): Tags are copied while holding the mutex, then the copy is sent after unlocking, preventing data races.
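The three fixes combine into a shape like this (a simplified sketch with illustrative types, not the PR's actual `ManagerService`):

```go
package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu   sync.Mutex
	tags []string
	out  chan []string
}

// delete removes matching tags by building a fresh toKeep slice, copies the
// result while locked, and only sends on the channel after unlocking.
func (s *store) delete(victim string) {
	s.mu.Lock()
	var toKeep []string // fresh slice: no index shifting during iteration
	for _, t := range s.tags {
		if t != victim {
			toKeep = append(toKeep, t)
		}
	}
	s.tags = toKeep
	snapshot := append([]string(nil), s.tags...) // copy under the lock
	s.mu.Unlock()
	s.out <- snapshot // send after unlock: a slow receiver cannot deadlock us
}

func main() {
	s := &store{tags: []string{"a", "b", "a"}, out: make(chan []string, 1)}
	s.delete("a")
	fmt.Println(<-s.out) // [b]
}
```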
105-112: Proper data race prevention with defensive copying.
`GetTags` returns a copy rather than exposing the internal slice's backing array, preventing callers from racing with concurrent modifications. This is the correct pattern for thread-safe accessors.
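The defensive-copy accessor reduces to this pattern (illustrative types, not the PR's exact code):

```go
package main

import (
	"fmt"
	"sync"
)

type tagStore struct {
	mu   sync.Mutex
	tags []string
}

// GetTags returns a copy so callers cannot observe or race with later
// in-place mutations of the internal slice.
func (s *tagStore) GetTags() []string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return append([]string(nil), s.tags...)
}

func main() {
	s := &tagStore{tags: []string{"a", "b"}}
	snapshot := s.GetTags()
	s.tags[0] = "changed" // internal mutation does not affect the snapshot
	fmt.Println(snapshot) // [a b]
}
```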
Summary
This PR refactors the golemu codebase to improve maintainability, organization, and follow Go best practices.
Changes
Code Organization
- `config/` package
- `api/` package
- `tag/` package
- `connection/` package
- `server/` package
- Moved `main.go` to `cmd/golemu/main.go` for standard Go project layout

Code Quality Improvements
- Removed unnecessary `runtime.Gosched()` and `time.Sleep()` calls
- Fixed deprecated gin API usage (`BindWith` → `ShouldBindJSON`)

Documentation
Benefits
`go install github.com/iomz/golemu/cmd/golemu@latest`

Testing
Breaking Changes
None - this is a refactoring that maintains the same functionality and CLI interface.
Note
Restructures the monolithic app into modular packages with a new tag management HTTP API, refined LLRP client/server/simulator flows, shared LLRP parsing, and comprehensive tests plus CI updates.
- Split into `config/`, `api/`, `connection/`, `server/`, `tag/`; move entrypoint to `cmd/golemu/`.
- Add tag `ManagerService` and LLRP `Handler`; extract shared LLRP header/body readers (`connection/llrp_common.go`).
- Expose `POST/DELETE/GET /api/v1/tags` with validation and proper status codes.
- Refine `client`, `server`, and `simulator` flows with improved connection/retry and message handling; bump version to `0.2.0`.
- Add tests for `config`, `api`, `connection` (client, handler, simulator, LLRP parsing), `server`, and `tag` (`./...`).
- Update `README` install/build instructions; update `LICENSE` year; extend `.gitignore`.

Written by Cursor Bugbot for commit 3e6b187. This will update automatically on new commits. Configure here.
Summary by CodeRabbit
New Features
Documentation
Tests
CI
Chore