
Add memory-aware batching to CPU path #12

Closed

kalidke wants to merge 1 commit into main from fix/cpu-batching


Conversation


@kalidke kalidke commented Feb 2, 2026

Summary

Details

The CPU path in getboxes previously processed entire image stacks at once, which caused OOM errors on large inputs (e.g., an 8.8 GB input could spike to 37 GB+ once DoG filter intermediates are allocated).

Memory multipliers:

  • Standard DoG: 6x input size
  • sCMOS variance-weighted: 10x input size
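The multipliers above translate directly into a batch-size rule: divide the memory you are willing to spend by the per-frame working set. A minimal sketch of that calculation, assuming illustrative names (`batch_size`, the half-of-free-RAM budget, and `frame_bytes` are not from the PR itself):

```julia
# Hypothetical sketch of the batch-size rule described above.
# Assumption: we budget half of free RAM to leave headroom for other allocations.
function batch_size(nframes::Int, frame_bytes::Int; scmos::Bool=false)
    multiplier = scmos ? 10 : 6        # peak working set: 6x standard DoG, 10x sCMOS
    budget = Sys.free_memory() ÷ 2     # available RAM in bytes, halved for headroom
    per_frame = frame_bytes * multiplier
    # at least 1 frame per batch, never more than the whole stack
    max(1, min(nframes, Int(budget ÷ per_frame)))
end
```

With the 10x sCMOS multiplier the per-frame cost is larger, so the sCMOS batch size is never bigger than the standard one for the same input.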

Test plan

  • Existing unit tests pass
  • Performance benchmarks pass
  • Test with large image stack that previously caused OOM

🤖 Generated with Claude Code

The CPU path in getboxes previously processed entire image stacks at once,
causing OOM on large inputs. This mirrors the GPU batching logic:

- Use Sys.free_memory() to determine available RAM
- Calculate batch size based on memory requirements (6x for standard, 10x for sCMOS)
- Process large stacks in batches with proper frame offset tracking
- Call GC.gc(false) between batches to release allocations
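The four steps above can be sketched as a driver loop. This is an assumed structure, not the actual getboxes code: `process_in_batches` and `process_batch!` are hypothetical names standing in for the real DoG filtering, and only the offset tracking and `GC.gc(false)` calls come from the PR description:

```julia
# Minimal sketch of the batched CPU loop described above (assumed structure).
# process_batch! stands in for the actual per-batch detection work.
function process_in_batches(stack::AbstractArray{<:Real,3}, bsz::Int, process_batch!)
    nframes = size(stack, 3)
    results = Vector{Any}()
    for start in 1:bsz:nframes
        stop = min(start + bsz - 1, nframes)
        batch = @view stack[:, :, start:stop]   # no copy of the batch itself
        # frame offset maps per-batch detections back to global frame indices
        push!(results, process_batch!(batch, start - 1))
        GC.gc(false)   # incremental collection to release batch intermediates
    end
    results
end
```

Calling `GC.gc(false)` requests an incremental (young-generation) collection rather than a full one, keeping the pause between batches short while still releasing the filter intermediates from the previous batch.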

Fixes issue #11.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

kalidke commented Feb 2, 2026

Superseded - CPU batching fix moved to PR #10 (tuple-pattern)

@kalidke kalidke closed this Feb 2, 2026
@kalidke kalidke deleted the fix/cpu-batching branch February 8, 2026 21:19
