
refactor: restructure codebase into modular packages#37

Merged
iomz merged 6 commits into main from refactor/restructure-codebase
Nov 13, 2025

Conversation


@iomz iomz commented Nov 12, 2025

Summary

This PR refactors the golemu codebase to improve maintainability, organization, and follow Go best practices.

Changes

Code Organization

  • ✅ Extracted configuration to config/ package
  • ✅ Extracted API handlers to api/ package
  • ✅ Extracted tag management to tag/ package
  • ✅ Extracted LLRP connection handling to connection/ package
  • ✅ Extracted server mode to server/ package
  • ✅ Moved main.go to cmd/golemu/main.go for standard Go project layout

Code Quality Improvements

  • ✅ Extracted common LLRP header reading logic to reduce duplication
  • ✅ Removed unnecessary runtime.Gosched() and time.Sleep() calls
  • ✅ Improved error handling throughout (functions now return errors)
  • ✅ Fixed deprecated gin API usage (BindWith → ShouldBindJSON)
  • ✅ Updated HTTP status codes to standard values
  • ✅ Removed unused code, variables, and fields
  • ✅ Added constants for magic numbers (LLRP header size)

Documentation

  • ✅ Updated README with correct installation instructions for new structure

Benefits

  • Better maintainability: Code is now organized into logical packages
  • Easier to test: Modular structure allows for better unit testing
  • Standard Go layout: Follows Go community best practices
  • Installable: Can be installed via go install github.com/iomz/golemu/cmd/golemu@latest
  • Less duplication: Common LLRP logic is now shared
  • Better error handling: More robust error handling throughout

Testing

  • ✅ All code compiles successfully
  • ✅ No linter errors
  • ✅ Binary builds and runs correctly
  • ✅ Help output works as expected

Breaking Changes

None - this is a refactoring that maintains the same functionality and CLI interface.


Note

Restructures the monolithic app into modular packages with a new tag management HTTP API, refined LLRP client/server/simulator flows, shared LLRP parsing, and comprehensive tests plus CI updates.

  • Core architecture:
    • Split monolith into packages: config/, api/, connection/, server/, tag/; move entrypoint to cmd/golemu/.
    • Add thread-safe tag ManagerService and LLRP Handler; extract shared LLRP header/body readers (connection/llrp_common.go).
  • HTTP API:
    • New endpoints POST/DELETE/GET /api/v1/tags with validation and proper status codes.
  • CLI and modes:
    • Implement and wire client, server, and simulator flows with improved connection/retry and message handling; bump version to 0.2.0.
  • Testing:
    • Add unit tests across config, api, connection (client, handler, simulator, LLRP parsing), server, and tag.
  • CI/Docs:
    • Add Docker build-and-publish workflow; expand Codecov to test all packages (./...).
    • Update README install/build instructions; update LICENSE year; extend .gitignore.

Written by Cursor Bugbot for commit 3e6b187.

Summary by CodeRabbit

  • New Features

    • Modular CLI with client/server/simulator modes, HTTP tag create/delete/list API, LLRP client/server/simulator behavior, and a thread-safe tag management service.
  • Documentation

    • Reworked installation/build instructions with verification/PATH note; updated LICENSE year to 2025 and author handle format.
  • Tests

    • Extensive unit tests added across config, API, LLRP parsing, connection/simulator, server, and tag manager.
  • CI

    • Added Docker build-and-publish workflow; test coverage now runs across all packages.
  • Chore

    • Updated .gitignore entries.

- Extract configuration to config/ package
- Extract API handlers to api/ package
- Extract tag management to tag/ package
- Extract LLRP connection handling to connection/ package
- Extract server mode to server/ package
- Move main.go to cmd/golemu/main.go for standard Go layout
- Extract common LLRP header reading logic
- Remove unnecessary runtime.Gosched() and time.Sleep() calls
- Improve error handling throughout
- Fix deprecated gin API usage (BindWith -> ShouldBindJSON)
- Update HTTP status codes to standard values
- Remove unused code and variables
- Update README with correct installation instructions

coderabbitai bot commented Nov 12, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Breaks the previous monolith into modular packages (config, api, connection, server, tag), adds a new cmd entrypoint, implements LLRP client/handler/simulator, exposes a Gin HTTP API for tag management, adds many unit tests and CI workflows, updates README and LICENSE year, and removes the old main.go.

Changes

Cohort (Files): Summary

  • Metadata & Docs (LICENSE, README.md, .gitignore): Updated LICENSE year to 2025; rewrote README installation/build instructions (new "From Source"/"Build Locally" sections, verification/PATH note, author handle update); added coverage.out and golemu to .gitignore.
  • CLI / Config (config/config.go, config/config_test.go, cmd/golemu/main.go): Added Kingpin-based configuration and GetConfig(); new cmd/golemu/main.go entrypoint wiring client/server/simulator modes and logging; basic config tests.
  • API Layer & Tests (api/handlers.go, api/server.go, api/handlers_test.go): New api.Handler with POST/DELETE/GET /tags endpoints using a tag.Manager channel; api.Server to start the Gin router on the configured port; comprehensive handler unit tests covering validation and conflict handling.
  • Connection / LLRP Core (connection/llrp_common.go, connection/llrp_common_test.go): Added LLRP header/message reader utilities (ReadLLRPHeader/ReadLLRPMessage) with length/EOF validation, plus tests for parsing and error paths.
  • Connection: Handler & Client (connection/handler.go, connection/handler_test.go, connection/client.go, connection/client_test.go): New LLRP Handler that processes incoming messages, starts a report loop, emits reader events, and manages atomic message IDs; Client implements a TCP LLRP client with retry/backoff and message handling; extensive tests exercising message types, loops, and error paths.
  • Connection: Simulator (connection/simulator.go, connection/simulator_test.go): New Simulator that loads .gob event-cycle files and publishes RO_ACCESS_REPORTs over TCP; handles the handshake and simulation loop; tests for file loading and cycle logic.
  • Tag Management (tag/types.go, tag/manager.go, tag/manager_test.go): Introduced the ManagementAction enum and Manager struct; implemented a thread-safe ManagerService processing Add/Delete/Retrieve commands via channels and emitting tag updates conditionally; comprehensive tests for add/delete/retrieve and concurrency semantics.
  • Server Orchestration (server/server.go, server/server_test.go): New exported Server that initializes channels/services, loads tags from file, starts the API and LLRP accept loop, and runs the tag manager lifecycle; tests for initialization, tag loading, and the manager loop.
  • Monolith Removal (main.go): Removed the legacy monolithic main.go and its embedded CLI/API/LLRP/tag-manager logic; functionality redistributed across packages.
  • CI / Packaging (.github/workflows/docker.yml, .github/workflows/codecov.yml): Added a workflow to build and push multi-arch Docker images to GHCR; updated the codecov workflow to run tests across all packages (./...).
  • Tests & Misc (various *_test.go): Added many unit tests across api, config, connection, tag, and server packages using mock connections, channel assertions, and concurrency-based checks.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant CLI
    participant Config
    participant Server
    participant API
    participant TagMgr
    participant LLRP

    CLI->>Config: parse flags/commands (GetConfig)
    Config-->>CLI: Config

    CLI->>Server: NewServer(config)
    Server->>TagMgr: NewManagerService(tagManagerChan, tagUpdatedChan, isConnAlive)
    Server->>Server: loadTags() from file
    Server->>API: NewServer(apiPort, tagManagerChan)

    par Start services
      Server->>API: Start() (goroutine)
      Server->>Server: runTagManager(signals) (goroutine)
    end

    Server->>LLRP: listen & accept connections
    LLRP->>Server: connection established → send READER_EVENT_NOTIFICATION

    Note over API,TagMgr: API tag operations
    API->>TagMgr: POST/DELETE /tags → Manager cmd over channel
    TagMgr->>TagMgr: Process(Add/Delete/Retrieve)
    TagMgr-->>LLRP: tagUpdatedChan (if connection alive)

    Note over LLRP: Reporting loop
    LLRP->>LLRP: rebuild reports on update
    LLRP->>LLRP: emit RO_ACCESS_REPORT / KEEP_ALIVE (ticker)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

  • Concurrency, channel orchestration, and lifecycle wiring across packages require careful review.
  • Tests are numerous and exercise async behavior; check for flakiness and race conditions.

Files/areas to focus on:

  • server/server.go — lifecycle, channel orchestration, API startup, signal handling.
  • connection/handler.go & connection/simulator.go — atomic message ID handling, report loop correctness, goroutine lifecycle and error propagation.
  • tag/manager.go — mutex usage, deduplication logic, and safe channel emissions.
  • api/handlers.go & api/handlers_test.go — JSON binding/validation, mapping to Manager commands, and HTTP status handling.
  • connection/llrp_common.go & tests — binary parsing, length checks, and EOF boundary behavior.

Poem

🐇 I hopped from one big file to many bright lanes,
Split readers and tags into tidy domains.
CLI steers the server, the simulator sings,
Tests hum like badges and CI builds wings.
A rabbit applauds: modular hops and new gains!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 13.92%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately summarizes the main objective of the PR: restructuring the codebase into modular packages. The changes across config/, api/, tag/, connection/, and server/ packages directly align with this refactoring goal.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

🧹 Nitpick comments (5)
connection/client.go (1)

57-84: Consider checking write errors for completeness.

Lines 63 and 68 write to the connection without checking for errors. While acceptable for test-only code, adding error checks would improve robustness.

Optional improvement:

 	case llrp.ReaderEventNotificationHeader:
 		log.WithFields(log.Fields{
 			"Message ID": messageID,
 		}).Info(">>> READER_EVENT_NOTIFICATION")
-		conn.Write(llrp.SetReaderConfig(messageID + 1))
+		if _, err := conn.Write(llrp.SetReaderConfig(messageID + 1)); err != nil {
+			log.Warnf("error writing SET_READER_CONFIG: %v", err)
+		}
 	case llrp.KeepaliveHeader:
 		log.WithFields(log.Fields{
 			"Message ID": messageID,
 		}).Info(">>> KEEP_ALIVE")
-		conn.Write(llrp.KeepaliveAck(messageID + 1))
+		if _, err := conn.Write(llrp.KeepaliveAck(messageID + 1)); err != nil {
+			log.Warnf("error writing KEEP_ALIVE_ACK: %v", err)
+		}
connection/handler.go (2)

45-55: Redundant connection close calls.

Lines 49 and 53 explicitly call conn.Close() before returning, but line 42 already has defer conn.Close(). While calling Close multiple times is safe (idempotent), the explicit calls are redundant and can be removed for clarity.

Apply this diff:

 	for {
 		hdr, _, err := ReadLLRPMessage(conn)
 		if err == io.EOF {
 			log.Info("the client is disconnected, closing LLRP connection")
-			conn.Close()
 			return
 		} else if err != nil {
 			log.Infof("closing LLRP connection due to %s", err.Error())
-			conn.Close()
 			return
 		}

79-84: Simplify keepalive ticker initialization.

Lines 81-84 create an empty ticker then conditionally replace it. This is unnecessarily complex and the zero-value ticker is unused.

Apply this diff:

 func (h *Handler) startReportLoop(conn net.Conn, trds llrp.TagReportDataStack) {
 	roarTicker := time.NewTicker(time.Duration(h.reportInterval) * time.Millisecond)
-	keepaliveTicker := &time.Ticker{}
+	var keepaliveTicker *time.Ticker
 	if h.keepaliveInterval != 0 {
 		keepaliveTicker = time.NewTicker(time.Duration(h.keepaliveInterval) * time.Second)
+	} else {
+		// Create a stopped ticker that will never fire
+		keepaliveTicker = time.NewTicker(time.Hour)
+		keepaliveTicker.Stop()
 	}
tag/manager.go (1)

34-65: Consider consolidating tag update notifications.

The current implementation sends a notification on tagUpdatedChan for every individual tag added or deleted (lines 46, 56). When processing a batch of tags, this generates excessive channel traffic. For example, adding 100 tags would trigger 100 separate notifications.

Apply this diff to send a single notification after processing all tags in the command:

 func (s *ManagerService) Process(cmd Manager) {
 	s.mu.Lock()
 	defer s.mu.Unlock()
 
 	res := []*llrp.Tag{}
+	hasChanges := false
 	switch cmd.Action {
 	case AddTags:
 		for _, t := range cmd.Tags {
 			if i := s.tags.GetIndexOf(t); i < 0 {
 				s.tags = append(s.tags, t)
 				res = append(res, t)
-				if s.isConnAlive.Load() {
-					s.tagUpdatedChan <- s.tags
-				}
+				hasChanges = true
 			}
 		}
 	case DeleteTags:
 		for _, t := range cmd.Tags {
 			if i := s.tags.GetIndexOf(t); i >= 0 {
 				s.tags = append(s.tags[:i], s.tags[i+1:]...)
 				res = append(res, t)
-				if s.isConnAlive.Load() {
-					s.tagUpdatedChan <- s.tags
-				}
+				hasChanges = true
 			}
 		}
 	case RetrieveTags:
 		res = s.tags
 	}
+	
+	if hasChanges && s.isConnAlive.Load() {
+		s.tagUpdatedChan <- s.tags
+	}
+	
 	cmd.Tags = res
 	s.tagManagerChan <- cmd
 }
server/server.go (1)

90-106: Consider adding graceful shutdown for active LLRP connections.

The server spawns a goroutine for each connection (line 105) without tracking active connections. When the server receives SIGTERM, runTagManager calls log.Fatalf (line 135), which immediately terminates the program without closing active LLRP connections gracefully. While this may be acceptable for your use case, LLRP clients might benefit from receiving proper connection close notifications.

Consider adding connection tracking and graceful shutdown:

+	var wg sync.WaitGroup
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
 	// Handle LLRP connections
 	log.Info("starting LLRP connection...")
 	for {
+		// Check for shutdown signal
+		select {
+		case <-ctx.Done():
+			log.Info("Shutting down, waiting for connections to close...")
+			wg.Wait()
+			return 0
+		default:
+		}
+
 		conn, err := l.Accept()
 		if err != nil {
 			log.Error(err)
 			continue
 		}
 		log.Info("LLRP connection initiated")
 
 		if err := s.llrpHandler.SendReaderEventNotification(conn); err != nil {
 			log.Errorf("error sending READER_EVENT_NOTIFICATION: %v", err)
 			conn.Close()
 			continue
 		}
-		go s.llrpHandler.HandleRequest(conn, s.tagService.GetTags())
+		wg.Add(1)
+		go func(c net.Conn) {
+			defer wg.Done()
+			defer c.Close()
+			s.llrpHandler.HandleRequest(c, s.tagService.GetTags())
+		}(conn)
 	}

And update runTagManager to signal shutdown instead of fatal exit:

 func (s *Server) runTagManager(signals chan os.Signal) {
 	for {
 		select {
 		case cmd := <-s.tagManagerChan:
 			s.tagService.Process(cmd)
 		case sig := <-signals:
-			log.Fatalf("%v", sig)
+			log.Infof("Received signal %v, initiating shutdown", sig)
+			return
 		}
 	}
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e71de63 and 47ad016.

📒 Files selected for processing (14)
  • LICENSE (1 hunks)
  • README.md (2 hunks)
  • api/handlers.go (1 hunks)
  • api/server.go (1 hunks)
  • cmd/golemu/main.go (1 hunks)
  • config/config.go (1 hunks)
  • connection/client.go (1 hunks)
  • connection/handler.go (1 hunks)
  • connection/llrp_common.go (1 hunks)
  • connection/simulator.go (1 hunks)
  • main.go (0 hunks)
  • server/server.go (1 hunks)
  • tag/manager.go (1 hunks)
  • tag/types.go (1 hunks)
💤 Files with no reviewable changes (1)
  • main.go
🧰 Additional context used
🧬 Code graph analysis (9)
api/server.go (3)
api/handlers.go (2)
  • Handler (18-20)
  • NewHandler (23-27)
server/server.go (2)
  • NewServer (41-63)
  • Server (24-38)
tag/types.go (1)
  • Manager (22-25)
server/server.go (5)
tag/types.go (1)
  • Manager (22-25)
tag/manager.go (2)
  • ManagerService (15-21)
  • NewManagerService (24-31)
api/handlers.go (2)
  • Handler (18-20)
  • NewHandler (23-27)
connection/handler.go (2)
  • Handler (18-25)
  • NewHandler (28-38)
api/server.go (2)
  • NewServer (21-26)
  • Server (15-18)
cmd/golemu/main.go (4)
config/config.go (16)
  • App (18-18)
  • Version (15-15)
  • GetConfig (59-73)
  • Debug (21-21)
  • Client (31-31)
  • IP (24-24)
  • Port (26-26)
  • Server (34-34)
  • APIPort (35-35)
  • PDU (27-27)
  • ReportInterval (28-28)
  • KeepaliveInterval (25-25)
  • InitialMessageID (22-22)
  • File (36-36)
  • Simulator (39-39)
  • SimulationDir (40-40)
connection/client.go (2)
  • Client (18-21)
  • NewClient (24-29)
server/server.go (2)
  • Server (24-38)
  • NewServer (41-63)
connection/simulator.go (2)
  • Simulator (25-32)
  • NewSimulator (35-45)
connection/client.go (2)
config/config.go (1)
  • Client (31-31)
connection/llrp_common.go (1)
  • ReadLLRPMessage (49-64)
connection/handler.go (1)
connection/llrp_common.go (1)
  • ReadLLRPMessage (49-64)
connection/simulator.go (2)
config/config.go (1)
  • Simulator (39-39)
connection/llrp_common.go (1)
  • ReadLLRPMessage (49-64)
tag/manager.go (1)
tag/types.go (4)
  • Manager (22-25)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
api/handlers.go (1)
tag/types.go (4)
  • Manager (22-25)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
config/config.go (3)
api/server.go (1)
  • Server (15-18)
server/server.go (1)
  • Server (24-38)
connection/simulator.go (1)
  • Simulator (25-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (20)
LICENSE (1)

2-2: LGTM!

Copyright year update is appropriate.

README.md (2)

23-52: LGTM!

Installation instructions are clear, well-organized, and correctly reference the new cmd/golemu structure. The PATH note is a helpful addition for users.


124-124: LGTM!

Author attribution update to GitHub handle format is appropriate.

tag/types.go (1)

9-25: LGTM!

Clean and idiomatic type definitions for tag management. The use of iota for ManagementAction constants and the simple Manager struct provide a clear API for tag operations.

connection/llrp_common.go (2)

13-23: LGTM!

The constant LLRPHeaderSize correctly represents the 10-byte LLRP header structure (2 + 4 + 4), and the LLRPHeader struct fields are appropriately typed.


25-46: LGTM!

ReadLLRPHeader correctly uses io.ReadFull to ensure exact byte counts are read, and properly converts Big Endian network bytes to native types. Error handling is appropriate.

connection/client.go (2)

17-29: LGTM!

Client struct and constructor follow standard Go patterns.


31-55: LGTM!

The client connection logic is appropriate for a test-only client. The infinite retry with fixed delay is acceptable for this use case, and error handling correctly distinguishes between EOF (clean disconnect) and other errors.

connection/simulator.go (3)

24-45: LGTM!

Simulator struct and constructor follow standard patterns with appropriate use of atomic operations for message ID management.


93-110: LGTM with consideration for error handling.

The message handling loop is well-structured. The use of log.Fatalf for errors aligns with the simulator's fail-fast philosophy, though it prevents graceful cleanup.


133-145: LGTM!

Event cycle wrapping logic is correct, and error handling is consistent with proper error propagation.

connection/handler.go (3)

17-38: LGTM!

Handler struct design is sound with appropriate use of atomic types for concurrent access to shared state.


86-142: LGTM with minor observation.

The report loop logic is sound. The pattern of setting isConnAlive to true at the start of each iteration (line 104) and checking it at the end (line 133) ensures proper error detection and graceful goroutine termination.


144-158: LGTM!

Both methods are straightforward and correctly implemented. The atomic operations for connection liveness tracking are appropriate.

config/config.go (2)

13-41: LGTM!

The CLI configuration using kingpin follows a standard pattern with reasonable defaults. The IP default of "0.0.0.0" (line 24) appropriately binds to all interfaces for server operation.


43-73: LGTM!

The Config struct and GetConfig function follow standard kingpin patterns for configuration management. All fields are properly mapped and dereferenced.

api/server.go (1)

21-38: LGTM!

The API server setup is clean and follows good practices. The constructor properly wires the handler with the tag management channel, and the Start method correctly sets up routes and delegates error handling to the caller.

server/server.go (2)

79-85: Verify handling of API server startup failures.

If the API server fails to start (e.g., port already in use), the error is only logged and the LLRP server continues running. This means the server could be running without API functionality available. Confirm whether this behavior is intentional or if the server should exit when the API fails to start.

If the API is required for operation, consider exiting on startup failure:

 	// Start API server
 	apiServer := api.NewServer(s.apiPort, s.tagManagerChan)
 	go func() {
 		if err := apiServer.Start(); err != nil {
-			log.Errorf("API server error: %v", err)
+			log.Fatalf("API server failed to start: %v", err)
 		}
 	}()

Or add a startup check:

	// Start API server with startup confirmation
	apiReady := make(chan error, 1)
	go func() {
		// Signal readiness or error
		apiReady <- apiServer.Start()
	}()
	
	// Wait briefly for startup or error
	select {
	case err := <-apiReady:
		if err != nil {
			log.Fatalf("API server failed to start: %v", err)
		}
	case <-time.After(100 * time.Millisecond):
		log.Info("API server starting...")
	}

109-127: LGTM!

The tag loading implementation is well-structured with proper error handling, file existence checks, and informative logging. The use of structured logging fields makes debugging easier.

api/handlers.go (1)

114-128: The review comment is incorrect—the synchronous channel pattern is safe.

This code implements a correct bidirectional request-response pattern on an unbuffered channel. The Process() function in tag/manager.go receives the Manager struct by value, populates its Tags field with results (line 63), and sends it back on the channel (line 64). The handler's send (line 119) blocks until the manager receives, processes, and responds—no deadlock occurs. The pattern is intentional and sound for synchronous RPC-style communication within the concurrent architecture.

Likely an incorrect or invalid review comment.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (1)
api/handlers.go (1)

92-116: Stop deleting tags when returning an error

Lines 94-110 keep sending delete commands even after hasError flips true. The handler then returns "error", so the HTTP layer answers 404 “not found” while some tags were actually removed. Validate the batch first (like reqAddTag), and only emit delete commands when every record parses successfully.

-func (h *Handler) reqDeleteTag(req []llrp.TagRecord) string {
-	hasError := false
-	for _, t := range req {
-		tagObj, err := llrp.NewTag(&llrp.TagRecord{
-			PCBits: t.PCBits,
-			EPC:    t.EPC,
-		})
-		if err != nil {
-			log.Errorf("error creating tag: %v", err)
-			hasError = true
-			continue
-		}
-
-		deleteCmd := tag.Manager{
-			Action: tag.DeleteTags,
-			Tags:   []*llrp.Tag{tagObj},
-		}
-		h.tagManagerChan <- deleteCmd
-	}
-
-	if hasError {
-		return "error"
-	}
+func (h *Handler) reqDeleteTag(req []llrp.TagRecord) string {
+	validTags := make([]*llrp.Tag, 0, len(req))
+	for _, t := range req {
+		tagObj, err := llrp.NewTag(&llrp.TagRecord{
+			PCBits: t.PCBits,
+			EPC:    t.EPC,
+		})
+		if err != nil {
+			log.Errorf("error creating tag: %v", err)
+			return "error"
+		}
+		validTags = append(validTags, tagObj)
+	}
+
+	for _, tagObj := range validTags {
+		deleteCmd := tag.Manager{
+			Action: tag.DeleteTags,
+			Tags:   []*llrp.Tag{tagObj},
+		}
+		h.tagManagerChan <- deleteCmd
+	}
+
 	log.Debugf("delete %v", req)
 	return "delete"
 }
🧹 Nitpick comments (2)
config/config_test.go (2)

11-24: Basic smoke test – consider validating default values.

This test verifies that GetConfig() doesn't panic and returns a non-nil struct. While this serves as a minimal smoke test, it doesn't validate actual behavior. Consider enhancing it to parse command-line arguments or verify that default values (e.g., IP "0.0.0.0", Port 5084, PDU 1500) are correctly reflected in the Config struct.


26-45: Consolidate with TestGetConfig to eliminate duplication.

This test duplicates the logic of TestGetConfig – both call GetConfig(), check for nil, and access fields without validating values. The only difference is that this test accesses all fields while TestGetConfig accesses three. Consider merging these into a single test that accesses all fields, reducing code duplication and maintenance overhead.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 47ad016 and ad38662.

📒 Files selected for processing (12)
  • .github/workflows/docker.yml (1 hunks)
  • .gitignore (1 hunks)
  • api/handlers.go (1 hunks)
  • api/handlers_test.go (1 hunks)
  • cmd/golemu/main.go (1 hunks)
  • config/config_test.go (1 hunks)
  • connection/handler.go (1 hunks)
  • connection/llrp_common.go (1 hunks)
  • connection/llrp_common_test.go (1 hunks)
  • connection/simulator.go (1 hunks)
  • server/server.go (1 hunks)
  • tag/manager_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • cmd/golemu/main.go
  • server/server.go
🧰 Additional context used
🧬 Code graph analysis (7)
connection/llrp_common_test.go (1)
connection/llrp_common.go (3)
  • LLRPHeaderSize (16-16)
  • ReadLLRPHeader (27-47)
  • ReadLLRPMessage (50-71)
config/config_test.go (1)
config/config.go (13)
  • GetConfig (59-73)
  • IP (24-24)
  • Port (26-26)
  • PDU (27-27)
  • Debug (21-21)
  • InitialMessageID (22-22)
  • InitialKeepaliveID (23-23)
  • KeepaliveInterval (25-25)
  • ReportInterval (28-28)
  • APIPort (35-35)
  • File (36-36)
  • SimulationDir (40-40)
  • Version (15-15)
connection/simulator.go (2)
config/config.go (1)
  • Simulator (39-39)
connection/llrp_common.go (1)
  • ReadLLRPMessage (50-71)
api/handlers_test.go (2)
tag/types.go (4)
  • Manager (22-25)
  • RetrieveTags (14-14)
  • AddTags (16-16)
  • DeleteTags (18-18)
api/handlers.go (1)
  • NewHandler (23-27)
api/handlers.go (2)
tag/types.go (4)
  • Manager (22-25)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
connection/handler.go (2)
  • NewHandler (29-40)
  • Handler (18-26)
tag/manager_test.go (2)
tag/types.go (4)
  • Manager (22-25)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
tag/manager.go (1)
  • NewManagerService (24-31)
connection/handler.go (1)
connection/llrp_common.go (1)
  • ReadLLRPMessage (50-71)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (2)
.gitignore (1)

10-11: Good additions for Go best practices.

Adding coverage.out and golemu to .gitignore aligns with standard Go project conventions: test coverage artifacts and compiled binaries should not be tracked in version control.

config/config_test.go (1)

47-51: LGTM!

This is a simple and effective smoke test that ensures the Version constant is populated.


codecov bot commented Nov 12, 2025

Codecov Report

❌ Patch coverage is 59.77383% with 249 lines in your changes missing coverage. Please review.
✅ Project coverage is 59.77%. Comparing base (e71de63) to head (3e6b187).
⚠️ Report is 7 commits behind head on main.

Files with missing lines Patch % Lines
connection/simulator.go 22.68% 90 Missing and 2 partials ⚠️
connection/client.go 41.42% 38 Missing and 3 partials ⚠️
cmd/golemu/main.go 0.00% 36 Missing ⚠️
server/server.go 55.22% 30 Missing ⚠️
connection/handler.go 80.41% 14 Missing and 5 partials ⚠️
api/handlers.go 90.32% 10 Missing and 2 partials ⚠️
api/server.go 0.00% 11 Missing ⚠️
connection/llrp_common.go 80.00% 3 Missing and 3 partials ⚠️
tag/manager.go 96.15% 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            main      #37       +/-   ##
==========================================
+ Coverage   0.00%   59.77%   +59.77%     
==========================================
  Files          1       10        +9     
  Lines        456      619      +163     
==========================================
+ Hits           0      370      +370     
+ Misses       456      233      -223     
- Partials       0       16       +16     


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
connection/simulator.go (2)

123-123: Unreachable code.

The infinite loop starting at line 102 never exits normally, making this return statement unreachable. Consider removing it or documenting why it exists.


161-188: Consider adding cancellation for cleaner shutdown.

The simulation goroutine lacks an explicit cancellation mechanism. It exits only when a send error occurs (line 179-182), meaning it continues running until the connection is closed. While this works because Run() defers conn.Close(), a cleaner approach would pass a context or done channel to allow explicit cancellation when the main loop exits or receives a signal.

Example with context:

func (s *Simulator) startSimulationLoop(ctx context.Context, conn net.Conn, simulationFiles []string, eventCycle *int, trds llrp.TagReportDataStack, roarTicker *time.Ticker) {
	go func() {
		defer s.loopStarted.Store(false)
		for {
			select {
			case <-ctx.Done():
				log.Info("simulation loop cancelled")
				return
			case <-roarTicker.C:
				tags, err := s.loadTagsForNextEventCycle(simulationFiles, eventCycle)
				// ... rest of logic
			}
		}
	}()
}

Then in Run(), create a context that gets cancelled on signal or before returning.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ad38662 and 6557abb.

📒 Files selected for processing (6)
  • .github/workflows/codecov.yml (1 hunks)
  • api/handlers_test.go (1 hunks)
  • connection/llrp_common.go (1 hunks)
  • connection/simulator.go (1 hunks)
  • tag/manager.go (1 hunks)
  • tag/manager_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • connection/llrp_common.go
  • api/handlers_test.go
🧰 Additional context used
🧬 Code graph analysis (3)
tag/manager_test.go (2)
tag/types.go (4)
  • Manager (22-25)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
tag/manager.go (1)
  • NewManagerService (24-31)
tag/manager.go (1)
tag/types.go (4)
  • Manager (22-25)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
connection/simulator.go (2)
config/config.go (1)
  • Simulator (39-39)
connection/llrp_common.go (1)
  • ReadLLRPMessage (50-74)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (4)
.github/workflows/codecov.yml (1)

21-21: LGTM!

Expanding test coverage to all packages (./...) correctly aligns with the new modular structure introduced in this PR.

tag/manager_test.go (1)

61-68: Error handling implemented correctly.

All llrp.NewTag calls throughout this test file now properly capture and handle errors, addressing the previous review concern. The pattern if err != nil { t.Fatalf(...) } ensures test failures are explicit when tag construction fails.

tag/manager.go (1)

34-78: Deadlock fix confirmed.

The channel sends (lines 74-77) now occur after releasing the mutex (line 71), resolving the previous deadlock concern. The tag copies are prepared under lock (lines 50-51, 63-64) and sent safely afterwards.
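The unlock-before-send pattern confirmed above can be sketched as follows. This is an illustrative stand-in, not the actual golemu types: `Tag`, `store`, and `snapshotAndNotify` are hypothetical names.

```go
package main

import (
	"fmt"
	"sync"
)

// Tag and store are illustrative stand-ins for the types in tag/types.go.
type Tag struct{ ID string }

type store struct {
	mu   sync.Mutex
	tags []Tag
}

// snapshotAndNotify copies the slice while holding the mutex, then sends
// the copy only after unlocking, so a slow receiver cannot deadlock
// against other goroutines waiting on the lock.
func (s *store) snapshotAndNotify(out chan<- []Tag) {
	s.mu.Lock()
	snapshot := make([]Tag, len(s.tags))
	copy(snapshot, s.tags)
	s.mu.Unlock() // release before the channel send

	out <- snapshot
}

func main() {
	s := &store{tags: []Tag{{ID: "a"}, {ID: "b"}}}
	out := make(chan []Tag, 1)
	s.snapshotAndNotify(out)
	fmt.Println(len(<-out)) // 2
}
```

If the send happened while the mutex was held, a receiver that itself needs the lock could never drain the channel, which is exactly the deadlock the fix removes.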

connection/simulator.go (1)

65-78: Previous review issues resolved.

The signal handling, error returns, and atomic guard for duplicate loops have all been properly implemented, addressing earlier concerns.

- Enhanced docstrings across all packages following Go conventions
- Added detailed parameter documentation for all exported functions
- Improved type documentation with usage examples and context
- Added comprehensive test coverage for connection, server, and simulator packages
- Fixed deadlock issues in tests with proper timeouts and goroutine management
- Tests now cover handler, client, simulator, and server functionality
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Nitpick comments (2)
connection/handler.go (1)

94-97: Consider continuing instead of returning on unknown headers.

Returning immediately on an unknown header (line 96) closes the connection and prevents processing any subsequent messages. If the client sends an unsupported message followed by valid ones, the handler will exit prematurely. Consider logging a warning and continuing the loop, unless treating an unknown header as a protocol violation is deliberate:

 		default:
 			log.Warnf("unknown header: %v", hdr.Header)
-			return
+			// Continue processing; don't close connection for unrecognized messages
 		}

Or, if closing is intentional for protocol safety, add a comment explaining why.

connection/simulator.go (1)

180-207: Consider adding graceful cancellation mechanism.

The simulation loop goroutine only exits on send errors (lines 198-200). If the connection closes or a signal arrives, this goroutine continues consuming ticker events until the next send attempt fails. While this eventually terminates, it's not immediate.

For cleaner shutdown, consider accepting a context or monitoring the done channel pattern:

-func (s *Simulator) startSimulationLoop(conn net.Conn, simulationFiles []string, eventCycle *int, trds llrp.TagReportDataStack, roarTicker *time.Ticker) chan struct{} {
+func (s *Simulator) startSimulationLoop(ctx context.Context, conn net.Conn, simulationFiles []string, eventCycle *int, trds llrp.TagReportDataStack, roarTicker *time.Ticker) chan struct{} {
 	done := make(chan struct{})
 	go func() {
 		defer s.loopStarted.Store(false)
 		defer close(done)
 		for {
 			select {
+			case <-ctx.Done():
+				log.Info("simulation loop cancelled")
+				return
 			case <-roarTicker.C:
 				tags, err := s.loadTagsForNextEventCycle(simulationFiles, eventCycle)
 				...
+			}
 		}
 	}()
 	return done
 }

This is optional since the current implementation will eventually stop, but explicit cancellation improves responsiveness.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6557abb and 758732d.

📒 Files selected for processing (14)
  • api/handlers.go (1 hunks)
  • api/server.go (1 hunks)
  • config/config.go (1 hunks)
  • connection/client.go (1 hunks)
  • connection/client_test.go (1 hunks)
  • connection/handler.go (1 hunks)
  • connection/handler_test.go (1 hunks)
  • connection/llrp_common.go (1 hunks)
  • connection/simulator.go (1 hunks)
  • connection/simulator_test.go (1 hunks)
  • server/server.go (1 hunks)
  • server/server_test.go (1 hunks)
  • tag/manager.go (1 hunks)
  • tag/types.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • connection/simulator_test.go
🚧 Files skipped from review as they are similar to previous changes (3)
  • connection/client.go
  • api/server.go
  • api/handlers.go
🧰 Additional context used
🧬 Code graph analysis (8)
connection/client_test.go (2)
connection/client.go (1)
  • NewClient (30-35)
connection/llrp_common.go (1)
  • ReadLLRPMessage (60-84)
connection/handler.go (1)
connection/llrp_common.go (1)
  • ReadLLRPMessage (60-84)
tag/manager.go (1)
tag/types.go (4)
  • Manager (23-26)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
config/config.go (3)
api/server.go (1)
  • Server (16-19)
server/server.go (1)
  • Server (26-40)
connection/simulator.go (1)
  • Simulator (29-37)
server/server_test.go (2)
server/server.go (1)
  • NewServer (53-75)
tag/types.go (4)
  • Manager (23-26)
  • AddTags (16-16)
  • RetrieveTags (14-14)
  • DeleteTags (18-18)
connection/handler_test.go (1)
connection/handler.go (1)
  • NewHandler (39-50)
connection/simulator.go (2)
config/config.go (1)
  • Simulator (39-39)
connection/llrp_common.go (1)
  • ReadLLRPMessage (60-84)
server/server.go (5)
tag/types.go (1)
  • Manager (23-26)
tag/manager.go (2)
  • ManagerService (18-24)
  • NewManagerService (32-39)
api/handlers.go (2)
  • Handler (19-21)
  • NewHandler (27-31)
connection/handler.go (2)
  • Handler (20-28)
  • NewHandler (39-50)
api/server.go (2)
  • NewServer (26-31)
  • Server (16-19)
🪛 GitHub Actions: Codecov
connection/handler.go

[warning] 108-108: DATA RACE detected involving startReportLoop and concurrent writes to mock connection during tests.

connection/handler_test.go

[error] 128-128: DATA RACE detected in TestHandler_HandleRequest_SetReaderConfig (race during execution).


[error] 230-236: DATA RACE detected in TestHandler_startReportLoop (race during execution).


[error] 287-287: DATA RACE detected in TestHandler_startReportLoop_WithKeepalive (race during execution).

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (6)
connection/handler.go (2)

17-50: LGTM: Goroutine guard properly implemented.

The reportLoopStarted atomic bool and its initialization correctly address the past review concern about preventing multiple concurrent report loops. The constructor properly initializes all atomic guards.


167-186: LGTM!

Both SendReaderEventNotification and IsConnAlive are implemented correctly with proper atomic operations and error handling.

connection/simulator.go (4)

26-59: LGTM: Atomic guard for simulation loop properly added.

The loopStarted field and its initialization correctly implement the guard mechanism suggested in past reviews to prevent duplicate simulation loops.


80-87: LGTM: Graceful signal handling implemented.

The signal handler now properly closes the listener and stops signal delivery instead of calling os.Exit(0), allowing deferred cleanup to execute. This correctly addresses the past review concern.


89-96: LGTM: Graceful shutdown on listener close handled correctly.

The code properly checks for net.ErrClosed and returns 0, treating listener closure as a normal shutdown condition. This correctly implements the past review suggestion.


145-164: LGTM: Error handling improved.

The function now returns an error (line 161) instead of calling log.Fatalf, making it testable and allowing callers to handle the error gracefully. This correctly addresses the past review concern.

iomz added 2 commits November 13, 2025 08:34
- Fix simulator skipping cycle 0 by removing premature eventCycle increment
- Fix data race in handler_test.go by using WaitGroup to synchronize goroutines
- Fix data race in handler.go by moving isConnAlive.Store(true) before loop
- Fix data race in simulator.go by closing connection when simulation loop terminates
- Fix data race in tag/manager.go by returning deep copies instead of exposing internal slice
- Make mockConn thread-safe for concurrent writes in tests
- Fix timestamp calculation in simulator.go to use UnixNano for epoch microseconds
- Refactor handler_test.go to use synchronization primitives instead of time.Sleep
- Fix syntax errors in client.go: add context import, complete if statement, add connection handling
- Fix server tests to avoid log.Fatalf termination by removing signal sending
- Add waitForCondition helper function for better test synchronization
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
connection/handler_test.go (1)

304-315: Shared buffer is still accessed without synchronization

startReportLoop keeps writing to the underlying bytes.Buffer while this test calls conn.Len(). mockConn.Len() just forwards to bytes.Buffer.Len, so the same race the previous review flagged will still be reported by go test -race. Please capture the writes in a thread-safe way (e.g., reuse the signaling wrapper with a mutex-protected counter or another synchronized sink) before asserting on the data.
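A mutex-protected sink of the kind the comment asks for could look like this; `safeBuffer` is an illustrative name, not the test file's actual mock.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// safeBuffer guards a bytes.Buffer with a mutex so a report loop writing
// from one goroutine and a test calling Len from another do not race.
type safeBuffer struct {
	mu  sync.Mutex
	buf bytes.Buffer
}

func (b *safeBuffer) Write(p []byte) (int, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.buf.Write(p)
}

func (b *safeBuffer) Len() int {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.buf.Len()
}

func main() {
	var b safeBuffer
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			b.Write([]byte("msg"))
		}()
	}
	wg.Wait()
	fmt.Println(b.Len()) // 12: four 3-byte writes
}
```

Because both Write and Len take the same mutex, `go test -race` sees a consistent happens-before ordering between the loop's writes and the test's reads.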

🧹 Nitpick comments (2)
api/handlers.go (2)

129-180: Document or buffer the synchronous request-response pattern.

The function sends commands to tagManagerChan and immediately reads responses from the same channel (lines 163-165). This synchronous pattern requires:

  1. The channel to be buffered, OR
  2. A separate goroutine actively processing commands from tagManagerChan

Without either, this will deadlock. While the tests demonstrate the goroutine pattern, the code doesn't document this requirement. Consider adding a comment in the Handler struct or the NewHandler constructor documenting that callers must service tagManagerChan asynchronously.

Example documentation:

 // Handler processes HTTP API requests for tag management operations.
 // It provides REST endpoints for adding, deleting, and retrieving tags.
+//
+// Important: The tagManagerChan must be serviced by a separate goroutine
+// that processes Manager commands and sends responses back on the same channel.
+// Failure to do so will cause request handlers to deadlock.
 type Handler struct {
 	tagManagerChan chan tag.Manager
 }

154-169: Consider batching tags for better performance.

The current implementation sends tags one at a time and waits for each response, making N synchronous round-trips for N tags. While this enables per-tag duplicate detection, it's slower than batching. If it's acceptable to return "some tags added, some duplicates" rather than rejecting the entire request when any duplicate exists, you could send all tags in a single Manager command.

Trade-offs:

  • Current approach: precise per-tag duplicate reporting, slower
  • Batched approach: faster, but duplicate detection is all-or-nothing at Manager level

Given the current API semantics (return error if any tag is duplicate), the current approach is correct. Only refactor if API semantics change to support partial success.
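The batched alternative discussed above could be sketched like this. All types and field names here are hypothetical, not the golemu API: one Manager command carries a slice of tags, trading per-tag duplicate reporting for a single round-trip.

```go
package main

import "fmt"

// managerCmd is a hypothetical batched command: all tags travel in one
// message, and the reply reports how many were actually added.
type managerCmd struct {
	action string
	tags   []string
	reply  chan int
}

// serveManager drains commands, skipping duplicates within and across
// batches, and replies with the count of newly added tags.
func serveManager(cmds chan managerCmd) {
	seen := map[string]bool{}
	for cmd := range cmds {
		added := 0
		for _, t := range cmd.tags {
			if !seen[t] {
				seen[t] = true
				added++
			}
		}
		cmd.reply <- added
	}
}

func main() {
	cmds := make(chan managerCmd)
	go serveManager(cmds)

	reply := make(chan int)
	cmds <- managerCmd{action: "AddTags", tags: []string{"a", "b", "a"}, reply: reply}
	fmt.Println(<-reply) // 2: one duplicate skipped within the batch
}
```

Note how this gives partial success by construction, which is exactly why it conflicts with the current all-or-nothing API semantics.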

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 758732d and 3e6b187.

📒 Files selected for processing (9)
  • api/handlers.go (1 hunks)
  • api/handlers_test.go (1 hunks)
  • connection/client.go (1 hunks)
  • connection/handler.go (1 hunks)
  • connection/handler_test.go (1 hunks)
  • connection/llrp_common_test.go (1 hunks)
  • connection/simulator.go (1 hunks)
  • server/server_test.go (1 hunks)
  • tag/manager.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • server/server_test.go
  • connection/handler.go
🧰 Additional context used
🧬 Code graph analysis (7)
tag/manager.go (1)
tag/types.go (4)
  • Manager (23-26)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
api/handlers.go (1)
tag/types.go (4)
  • Manager (23-26)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
api/handlers_test.go (2)
tag/types.go (4)
  • Manager (23-26)
  • AddTags (16-16)
  • DeleteTags (18-18)
  • RetrieveTags (14-14)
api/handlers.go (1)
  • NewHandler (57-61)
connection/client.go (2)
config/config.go (1)
  • Client (31-31)
connection/llrp_common.go (1)
  • ReadLLRPMessage (60-84)
connection/simulator.go (1)
connection/llrp_common.go (2)
  • LLRPHeader (22-26)
  • ReadLLRPMessage (60-84)
connection/handler_test.go (1)
connection/handler.go (1)
  • NewHandler (39-50)
connection/llrp_common_test.go (1)
connection/llrp_common.go (3)
  • LLRPHeaderSize (17-17)
  • ReadLLRPHeader (32-52)
  • ReadLLRPMessage (60-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (9)
connection/simulator.go (1)

68-155: LGTM - Signal handling and lifecycle management properly implemented.

The refactored code successfully addresses the previous concerns:

  • Signal handler now closes the listener instead of calling os.Exit (lines 86-87)
  • Accept error handling checks for net.ErrClosed to distinguish graceful shutdown (lines 93-95)
  • The main loop uses select to coordinate message reading with simulation lifecycle (lines 126-153)
  • The nil simulationDone channel pattern (line 112) is correct - it will be ignored by select until assigned
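The nil-channel select behavior mentioned in the last point can be demonstrated in isolation; `firstReady` is an illustrative helper, not code from the simulator.

```go
package main

import "fmt"

// firstReady reports which select case fires. A receive from a nil
// channel blocks forever, so select simply ignores that case until the
// channel is assigned, which is the simulationDone pattern above.
func firstReady(done chan struct{}, tick chan struct{}) string {
	select {
	case <-done:
		return "done"
	case <-tick:
		return "tick"
	}
}

func main() {
	var done chan struct{} // nil until the simulation loop starts
	tick := make(chan struct{}, 1)
	tick <- struct{}{}
	fmt.Println(firstReady(done, tick)) // tick: the nil case is ignored

	done = make(chan struct{})
	close(done)
	fmt.Println(firstReady(done, tick)) // done: the channel is now live
}
```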
connection/llrp_common_test.go (2)

17-142: LGTM - Comprehensive test coverage for LLRP parsing.

The tests properly exercise:

  • Header size constant validation
  • Valid and invalid header parsing with appropriate error expectations
  • Message reading with and without body data
  • Error conditions (incomplete data, EOF, UnexpectedEOF)

The test structure is clean and follows Go testing best practices.


144-206: Well-designed mock with proper thread safety.

The mockConn implementation correctly provides thread-safe access to the writer via mutex (lines 160-161), and the Len() helper (lines 170-177) maintains this safety. This is essential for tests that spawn goroutines.

api/handlers_test.go (2)

25-89: Excellent test structure with proper error handling.

The test correctly:

  • Validates llrp.NewTag errors (lines 42-44)
  • Uses a ready channel to synchronize goroutine startup before making the HTTP request (lines 48-60)
  • Simulates the tag manager's request-response pattern

This synchronization approach prevents race conditions in tests.
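The ready-channel handshake praised here can be reduced to a few lines; the names below are illustrative, not taken from the test file.

```go
package main

import "fmt"

// requestResponse shows the pattern: the servicing goroutine closes
// ready before blocking on requests, so the caller never sends into a
// channel nobody is about to receive from.
func requestResponse() string {
	requests := make(chan string)
	responses := make(chan string)
	ready := make(chan struct{})

	go func() {
		close(ready) // signal: the servicer is about to block on requests
		req := <-requests
		responses <- "handled:" + req
	}()

	<-ready // don't send until the goroutine has started
	requests <- "add-tag"
	return <-responses
}

func main() {
	fmt.Println(requestResponse()) // handled:add-tag
}
```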


251-296: Good defensive programming in test setup.

Creating tags before spawning the goroutine (lines 256-259) ensures that any tag creation errors fail fast with clear diagnostics, rather than causing confusing failures inside the goroutine.

connection/client.go (2)

47-92: Robust connection retry logic with proper context handling.

The implementation correctly handles:

  • Context cancellation during connection attempts (lines 59-62, 70-72)
  • Exponential backoff with a reasonable cap (1s → 30s max, lines 52-69)
  • EOF as graceful disconnect (lines 82-84)
  • Connection lifecycle logging

94-131: Correct messageID overflow handling.

Line 97 wraps to 1 instead of 0 when messageID overflows, which is appropriate for LLRP message ID semantics. The message handling covers the key LLRP headers and logs unknown types appropriately.

tag/manager.go (2)

47-101: Excellent fixes for thread safety and deadlock prevention.

The refactored code successfully addresses all previous concerns:

  1. Deadlock prevention (lines 94, 97-100): Mutex is released before any channel sends, eliminating the potential for deadlock when receivers aren't ready.

  2. Safe deletion (lines 68-81): Instead of modifying the slice during iteration, the code builds a new toKeep slice, avoiding index shifting bugs.

  3. Copy before channel send (lines 63-64, 84-85): Tags are copied while holding the mutex, then the copy is sent after unlocking, preventing data races.


105-112: Proper data race prevention with defensive copying.

GetTags returns a copy rather than exposing the internal slice backing array, preventing callers from racing with concurrent modifications. This is the correct pattern for thread-safe accessors.
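A GetTags-style defensive accessor looks like this; the `manager` type here is a simplified stand-in for the real one.

```go
package main

import (
	"fmt"
	"sync"
)

type manager struct {
	mu   sync.Mutex
	tags []string
}

// GetTags returns a copy so callers cannot race with, or mutate,
// the internal slice's backing array.
func (m *manager) GetTags() []string {
	m.mu.Lock()
	defer m.mu.Unlock()
	out := make([]string, len(m.tags))
	copy(out, m.tags)
	return out
}

func main() {
	m := &manager{tags: []string{"e200", "e201"}}
	got := m.GetTags()
	got[0] = "mutated"          // only the caller's copy changes
	fmt.Println(m.GetTags()[0]) // e200: internal state untouched
}
```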

@iomz iomz merged commit 44c03b8 into main Nov 13, 2025
6 checks passed
@iomz iomz deleted the refactor/restructure-codebase branch November 13, 2025 00:04