Integrate TLSF allocator and lazy widget extension #161
Merged
Conversation
This introduces a twin_raw_* indirection layer between the memory stats tracking (memstats.c) and the underlying allocator. When CONFIG_MEM_TLSF is enabled, allocations route through a self-contained TLSF (Two-Level Segregated Fit) allocator backed by a static pool, providing O(1) bounded-time malloc/free for real-time embedded targets. When disabled, twin_raw_* inlines to libc malloc/free with zero overhead.

The TLSF implementation is stripped to static-pool essentials (pool_init, malloc, realloc, free) and uses the portable twin_clz/twin_clzll helpers instead of raw compiler intrinsics. A key deviation from upstream TLSF: block_find_free() trims to the actual request size rather than the bin minimum, avoiding catastrophic internal fragmentation when small requests hit large bins.

Move optional widget fields (callback, callback_data, want_focus) into a lazily-allocated twin_widget_ext_t block. Widgets that never register a callback keep ext == NULL and save 16 bytes per widget (64-bit) or 8 bytes (32-bit). The copy_geom field stays inline in the base struct for app-level direct access. All call sites in box.c, button.c, and widget.c were updated to use inline accessors.
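The indirection layer described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the inline-function shape and the tlsf_* prototypes are assumptions for the sketch.

```c
/* Sketch of the twin_raw_* indirection: one set of names, two backends.
 * Function shapes and tlsf_* prototypes are assumptions for illustration. */
#include <stdlib.h>
#include <stddef.h>

#ifdef CONFIG_MEM_TLSF
/* Provided by the bundled TLSF allocator (hypothetical prototypes). */
void *tlsf_malloc(size_t size);
void *tlsf_realloc(void *ptr, size_t size);
void tlsf_free(void *ptr);

static inline void *twin_raw_malloc(size_t n) { return tlsf_malloc(n); }
static inline void *twin_raw_realloc(void *p, size_t n) { return tlsf_realloc(p, n); }
static inline void twin_raw_free(void *p) { tlsf_free(p); }
#else
/* Disabled: each wrapper inlines to the corresponding libc call,
 * so the indirection costs nothing. */
static inline void *twin_raw_malloc(size_t n) { return malloc(n); }
static inline void *twin_raw_realloc(void *p, size_t n) { return realloc(p, n); }
static inline void twin_raw_free(void *p) { free(p); }
#endif
```

The point of the layer is that memstats.c calls twin_raw_* rather than malloc/free directly, so the backend can be swapped at build time without touching the tracking code.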
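To make the O(1) claim concrete, here is an illustrative two-level bin mapping of the kind TLSF uses, built on a portable count-leading-zeros fallback standing in for twin_clz. SL_LOG2 and the function names are assumptions for this sketch, not the PR's code.

```c
/* Illustrative TLSF two-level bin mapping: first-level index = power of
 * two, second-level index = the next SL_LOG2 bits below the top bit.
 * Assumes size >= (1 << SL_LOG2); real TLSF enforces a minimum block size. */
#include <stdint.h>

#define SL_LOG2 4 /* 2^4 = 16 second-level subdivisions per power of two */

/* Portable clz fallback (stand-in for twin_clz). */
static int clz32(uint32_t x)
{
    int n = 0;
    if (x == 0)
        return 32;
    while (!(x & 0x80000000u)) {
        x <<= 1;
        n++;
    }
    return n;
}

/* Map a request size to (first-level, second-level) free-list indices
 * in constant time: fl = floor(log2(size)). */
static void tlsf_mapping(uint32_t size, int *fl, int *sl)
{
    *fl = 31 - clz32(size);
    *sl = (int)((size >> (*fl - SL_LOG2)) & ((1u << SL_LOG2) - 1));
}
```

Because free lists are indexed this way, a fit is found with two bitmap scans rather than a search. The deviation noted above matters after the fit: upstream TLSF carves the found block at the bin's minimum size, while this port trims it to the actual request and returns the tail to the free lists, so a small request served from a large bin does not strand the difference.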
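The lazy extension pattern described above can be sketched like this. Field names follow the PR description; the accessor names and the calloc-on-first-use shape are assumptions for illustration.

```c
/* Sketch of a lazily-allocated widget extension block. Widgets that never
 * register a callback keep ext == NULL and pay only one pointer. */
#include <stdbool.h>
#include <stdlib.h>

typedef void (*twin_callback_t)(void *widget, void *data);

typedef struct {
    twin_callback_t callback;
    void *callback_data;
    bool want_focus;
} twin_widget_ext_t;

typedef struct {
    /* ... base fields; copy_geom stays inline here for direct access ... */
    twin_widget_ext_t *ext; /* NULL until first optional field is set */
} twin_widget_t;

/* Allocate the extension block on first use (hypothetical accessor name). */
static twin_widget_ext_t *widget_ext(twin_widget_t *w)
{
    if (!w->ext)
        w->ext = calloc(1, sizeof(twin_widget_ext_t));
    return w->ext;
}

/* Read-side accessor tolerates ext == NULL. */
static twin_callback_t widget_callback(const twin_widget_t *w)
{
    return w->ext ? w->ext->callback : NULL;
}
```

Call sites then go through the accessors: reads fall back to a default when ext is NULL, and only a write path (registering a callback, requesting focus) triggers the allocation.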
Summary by cubic

Add an optional TLSF-backed allocator and a lazy widget extension to reduce RAM use and improve real-time behavior. Adds a twin_raw_* allocator layer and moves optional widget fields out of the base struct.

New Features

- twin_raw_* allocator indirection; memory stats now track allocations via this layer. The tracking table itself stays on libc to avoid recursion.

Migration
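The recursion concern is worth spelling out: if the stats tracker allocated its own bookkeeping through the tracked layer, recording an allocation would itself trigger a tracked allocation. A minimal sketch of the split, with hypothetical names (twin_malloc, mem_record_t are not from the PR):

```c
/* Payload goes through twin_raw_* (TLSF or libc); the tracking records
 * always come straight from libc so recording never re-enters the tracker. */
#include <stdlib.h>
#include <stddef.h>

/* Stand-in for the indirection layer; libc backend shown. */
static inline void *twin_raw_malloc(size_t n) { return malloc(n); }

typedef struct mem_record {
    void *ptr;
    size_t size;
    struct mem_record *next;
} mem_record_t;

static mem_record_t *records; /* tracking table: libc-allocated only */

void *twin_malloc(size_t size)
{
    void *p = twin_raw_malloc(size); /* payload: tracked backend */
    if (!p)
        return NULL;
    mem_record_t *r = malloc(sizeof(*r)); /* bookkeeping: always libc */
    if (r) {
        r->ptr = p;
        r->size = size;
        r->next = records;
        records = r;
    }
    return p;
}
```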
Written for commit f224621. Summary will update on new commits.