Conversation

@julienrbrt
Member

@julienrbrt julienrbrt commented Oct 30, 2025

ref: #1914

A choice has been made to implement this logic in the executor and avoid extending the reaper and the sequencer.
This is because updating the reaper means passing down the last fetched DA height across all components.
That would add a lot of complexity. Adding it to the sequencer may be preferable, but it makes the inclusion in a sync node less straightforward; this is being investigated.

Compared to the previous implementation, a forced transaction does not have any structure; it should be the raw transaction bytes from the execution client. This is to keep ev-node knowing nothing about the transaction: no signature checks, no validation of correctness. The execution client must make sure to reject gibberish transactions.

  • implement for executor
    • add size checks to make sure the batch doesn't go over max bytes (see the sketch at the end of this comment)
  • implement for syncer
  • requirements:
    • the syncer must be able to detect if the sequencer was malicious and hasn't included a forced tx
    • verify inclusion of the fetched tx bytes within the block data
    • if that happens, reject all future sequencer blocks
      • eventually, become based and keep producing blocks
  • tool for submitting forced txs in celestia easily -> feat(server): add force inclusion cmd ev-abci#295
  • make sure epochs are in sync between syncer and executor

---- for later, won't be included in this pr (ref #2797 (comment))

  • improve offline capabilities (if the sequencer goes offline, full nodes become fully based)
    • when the sequencer comes back online, it needs to check whether any block was produced :/
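
A minimal sketch of the executor-side size check mentioned above, assuming hypothetical names (splitByMaxBytes, forcedTxs, maxBytes); this is not the actual ev-node code:

// Sketch: keep adding forced txs until the next one would push the batch over
// maxBytes; the remainder is deferred to a later batch. Names are illustrative.
func splitByMaxBytes(forcedTxs [][]byte, maxBytes uint64) (batchTxs, pending [][]byte) {
    var currentSize uint64
    for i, tx := range forcedTxs {
        if currentSize+uint64(len(tx)) > maxBytes {
            return batchTxs, forcedTxs[i:]
        }
        batchTxs = append(batchTxs, tx)
        currentSize += uint64(len(tx))
    }
    return batchTxs, nil
}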

@github-actions
Contributor

github-actions bot commented Nov 1, 2025

The latest Buf updates on your PR. Results from workflow CI and Release / buf-check (pull_request).

Build: ✅ passed | Format: ⏩ skipped | Lint: ✅ passed | Breaking: ✅ passed | Updated (UTC): Nov 10, 2025, 3:58 PM

@claude
Contributor

claude bot commented Nov 1, 2025

Claude finished @julienrbrt's task —— View job


Comprehensive Code Review Complete

I've completed a thorough review of the forced inclusion mechanism implementation. This is an impressive feature with strong architectural decisions. Below are my findings organized by focus area.


Executive Summary

Overall Assessment: This is a well-designed implementation of a critical censorship-resistance feature. The code demonstrates good Go practices, thoughtful error handling, and comprehensive testing. However, there are several areas requiring attention before production deployment, particularly around concurrency safety, resource management, and handling edge cases.

Key Strengths:

  • Clear separation of concerns with modular architecture
  • Comprehensive documentation (ADR-019 is excellent)
  • Good test coverage for core functionality
  • Thoughtful handling of "smoothing" (transaction deferral within epochs)
  • Two-tier size validation prevents DoS attacks

Critical Issues: 2 high-priority, 5 medium-priority
Recommendations: 11 improvements identified


1. Code Quality & Architecture ⭐⭐⭐⭐

Strengths

  • Clean interfaces: ForcedInclusionRetriever interface is well-defined and properly abstracted
  • Modular design: Clear separation between DA retrieval, sequencer logic, and syncer verification
  • Idiomatic Go: Good use of atomic.Uint64, sync.Map, and proper mutex usage
  • Epoch-based design: Elegant solution to reduce DA query frequency

Issues

🔴 CRITICAL: Non-deterministic timestamp in based sequencer

Location: sequencers/based/sequencer.go:127

Timestamp: time.Time{}, // TODO(@julienrbrt): we need to use DA block timestamp for determinism

Impact: This breaks consensus determinism. All nodes must produce identical blocks.

Recommendation:

// Use DA block timestamp from the forced inclusion event
Timestamp: forcedTxsEvent.Timestamp, // Fetch from DA block metadata

This is already noted as a TODO but is blocking for production use. The DA layer should provide block timestamps.

🟡 MEDIUM: Memory leak risk in based sequencer txQueue

Location: sequencers/based/sequencer.go:36,111

txQueue  [][]byte // Unbounded slice

Issue: The txQueue can grow without bounds if DA epochs contain more transactions than can fit in blocks. No backpressure mechanism exists.

Recommendation: Add a maximum queue size with metrics:

const MaxTxQueueSize = 10000

if len(s.txQueue) > MaxTxQueueSize {
    s.logger.Warn().Msg("Transaction queue full, dropping oldest transactions")
    s.txQueue = s.txQueue[len(s.txQueue)-MaxTxQueueSize:]
}

🟡 MEDIUM: pendingForcedInclusionTxs not persisted

Location: sequencers/single/sequencer.go:56,81

Issue: In-memory pending queue is lost on restart. This could cause valid forced inclusion transactions to be forgotten.

Recommendation:

  1. Persist pending queue to datastore on updates
  2. Restore from datastore on startup
  3. This is mentioned as a follow-up task but should be prioritized
// Add persistence layer
func (s *Sequencer) persistPendingTxs() error {
    data, err := json.Marshal(s.pendingForcedInclusionTxs)
    if err != nil {
        return err
    }
    return s.db.Put(pendingTxsKey, data)
}

🟢 MINOR: Epoch calculation could be optimized

Location: block/internal/da/forced_inclusion_retriever.go:54

epochStart, epochEnd, currentEpochNumber := types.CalculateEpochBoundaries(daHeight, r.genesis.DAStartHeight, r.daEpochSize)

Issue: CalculateEpochBoundaries recalculates the epoch number internally, but the retriever doesn't use it. Minor inefficiency.

Recommendation: Return the epoch number from line 54 if needed elsewhere, or accept this minor recalculation for code clarity.
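
For context, epoch boundary math of this shape typically looks like the sketch below; this is illustrative only and may not match the actual types.CalculateEpochBoundaries:

// Illustrative sketch; the real implementation may differ (in particular,
// daEpochSize == 0 is special-cased so that all blocks land in epoch 1).
// Assumes daHeight >= daStartHeight and daEpochSize > 0.
func calculateEpochBoundaries(daHeight, daStartHeight, daEpochSize uint64) (start, end, epoch uint64) {
    epoch = (daHeight-daStartHeight)/daEpochSize + 1 // epochs numbered from 1
    start = daStartHeight + (epoch-1)*daEpochSize
    end = start + daEpochSize - 1
    return start, end, epoch
}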


2. Error Handling & Resilience ⭐⭐⭐⭐

Strengths

  • Graceful degradation: System continues when forced inclusion not configured
  • Proper error wrapping: Good use of %w and errors.Join
  • Height from future handling: Correctly backs off when DA height unavailable
  • Retry logic: executeTxsWithRetry in syncer properly handles transient failures

Issues

🟡 MEDIUM: Error handling inconsistency in forced inclusion retrieval

Location: block/internal/da/forced_inclusion_retriever.go:119-134

// any error during process, need to retry at next call
if processErrs != nil {
    r.logger.Warn().
        // ...
        Msg("Failed to retrieve DA epoch.. retrying next iteration")

    return &ForcedInclusionEvent{
        StartDaHeight: daHeight,
        EndDaHeight:   daHeight,
        Txs:           [][]byte{},
    }, nil // Returns nil error even though processErrs != nil
}

Issue: Swallows errors silently. Caller cannot distinguish between "no forced txs" and "retrieval failed".

Recommendation: Return the error or add a status field to ForcedInclusionEvent:

type ForcedInclusionEvent struct {
    StartDaHeight uint64
    EndDaHeight   uint64
    Txs           [][]byte
    Partial       bool // Indicates incomplete retrieval
}

🟢 MINOR: Context cancellation not checked in loops

Location: block/internal/da/forced_inclusion_retriever.go:106-111

for epochHeight := epochStart + 1; epochHeight < epochEnd; epochHeight++ {
    result := r.client.RetrieveForcedInclusion(ctx, epochHeight)
    // ...
}

Recommendation: Add context check for long-running epoch scans:

for epochHeight := epochStart + 1; epochHeight < epochEnd; epochHeight++ {
    select {
    case <-ctx.Done():
        return nil, ctx.Err()
    default:
    }
    // ...
}

3. Security ⭐⭐⭐⭐⭐

Strengths

  • Malicious sequencer detection: Syncer properly detects and rejects blocks missing forced inclusion transactions
  • Two-tier size validation: Prevents both DA-level and batch-level size attacks
  • No timing dependencies: Epoch boundaries are deterministic, preventing timing attacks
  • Transaction hashing: Uses SHA256 for transaction verification (good choice)

Issues

🟢 MINOR: No rate limiting on forced inclusion namespace

Location: Architecture level

Issue: While DA layer has native spam protection, there's no application-level rate limiting or prioritization for forced inclusion transactions.

Recommendation: Consider future enhancement to add:

  • Per-user rate limiting (by DA address)
  • Priority queue (older forced txs processed first)
  • Gas price requirements for forced inclusion

This is not blocking for initial release but should be documented as a known limitation.

🟢 MINOR: Hash collision risk (theoretical)

Location: block/internal/syncing/syncer.go:675-678

func hashTx(tx []byte) string {
    hash := sha256.Sum256(tx)
    return hex.EncodeToString(hash[:])
}

Issue: Using SHA256 is fine for transaction hashing, but the implementation doesn't handle the theoretical collision risk in the blockTxMap.

Recommendation: Current implementation is acceptable. SHA256 collision resistance is sufficient for this use case. Document that this is by design.


4. Performance & Resource Efficiency ⭐⭐⭐⭐

Strengths

  • Epoch-based fetching: Excellent optimization to reduce DA queries
  • Atomic operations: Proper use of atomic.Uint64 for DA height tracking
  • Efficient map operations: Good use of maps for O(1) transaction lookups
  • Parallel DA submissions: Executor uses errgroup for concurrent operations

Issues

🟡 MEDIUM: Sequential DA fetching within epochs

Location: block/internal/da/forced_inclusion_retriever.go:106-117

for epochHeight := epochStart + 1; epochHeight < epochEnd; epochHeight++ {
    result := r.client.RetrieveForcedInclusion(ctx, epochHeight)
    err = r.processForcedInclusionBlobs(event, result, epochHeight)
    processErrs = errors.Join(processErrs, err)
}

Issue: DA fetches are sequential, not parallel. For large epochs, this adds latency.

Recommendation: Use errgroup for parallel fetching:

g, gCtx := errgroup.WithContext(ctx)
numHeights := int(epochEnd - epochStart - 1)
results := make([]coreda.ResultRetrieve, numHeights)

for i := 0; i < numHeights; i++ {
    i := i // capture loop variable
    height := epochStart + 1 + uint64(i)
    g.Go(func() error {
        results[i] = r.client.RetrieveForcedInclusion(gCtx, height)
        return nil
    })
}
if err := g.Wait(); err != nil {
    return nil, err
}

Note: This is mentioned as a follow-up in PR comments (#2842), which is good planning.

🟢 MINOR: Redundant DA height loading

Location: block/internal/syncing/syncer.go:223

s.daRetrieverHeight.Store(max(s.genesis.DAStartHeight, s.cache.DaHeight(), state.DAHeight, s.getHighestStoredDAHeight()))

Issue: getHighestStoredDAHeight() is called on every startup and does multiple DB reads.

Recommendation: Cache the result or only compute when necessary. Consider adding a flag to skip this check after initial startup.
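
A possible shape for that caching, sketched with hypothetical field names on the existing Syncer struct:

// Hypothetical additions to the Syncer struct (names are illustrative):
//   highestDAOnce   sync.Once
//   highestDAHeight uint64

// cachedHighestStoredDAHeight computes the highest stored DA height once and
// reuses the result on subsequent calls.
func (s *Syncer) cachedHighestStoredDAHeight() uint64 {
    s.highestDAOnce.Do(func() {
        s.highestDAHeight = s.getHighestStoredDAHeight()
    })
    return s.highestDAHeight
}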


5. Testing & Reliability ⭐⭐⭐⭐

Strengths

  • Good unit test coverage: Core components have solid unit tests
  • Table-driven tests: Epoch calculation tests use proper table-driven approach
  • Mock-based testing: Good use of mockery for isolation
  • Integration tests: Syncer forced inclusion tests validate end-to-end flow

Issues

🟡 MEDIUM: Missing edge case tests

Location: Test coverage gaps

Missing test scenarios:

  1. Epoch boundary transitions: Test sequencer behavior when epoch changes mid-batch
  2. Concurrent DA height updates: Race conditions in SetDAHeight/GetDAHeight
  3. Large epoch sizes: Performance tests with 1000+ DA blocks per epoch
  4. Network partition scenarios: Sequencer behavior when DA unavailable for extended period
  5. Restart scenarios: Sequencer restart with pending forced inclusion transactions

Recommendation: Add these test cases before production deployment:

func TestSequencer_EpochBoundaryTransition(t *testing.T) {
    // Test when GetNextBatch is called at exact epoch boundary
}

func TestSequencer_ConcurrentDAHeightAccess(t *testing.T) {
    // Use race detector to verify atomic access patterns
}

🟢 MINOR: Test cleanup

Location: Various test files

Some tests don't properly clean up resources (contexts, goroutines). Run tests with -race flag to verify.


6. Documentation & Developer Experience ⭐⭐⭐⭐⭐

Strengths

  • Excellent ADR: ADR-019 is comprehensive and well-structured
  • Clear code comments: Complex logic is well-documented
  • Configuration examples: Good examples for both sequencer modes
  • User-facing docs: New based sequencer documentation is clear

Issues

🟢 MINOR: Missing operational guidance

Location: Documentation

Recommendation: Add operational runbook covering:

  1. How to monitor forced inclusion namespace usage
  2. Alert thresholds for pending transaction queues
  3. Recovery procedures when sequencer detected as malicious
  4. Recommended epoch sizes for different DA layer configurations

Example addition to docs:

## Operational Monitoring

### Metrics to Monitor
- `forced_inclusion_pending_count`: Should stay < 100 under normal operation
- `forced_inclusion_epoch_size_bytes`: Should not consistently exceed 50% of MaxBytes
- `forced_inclusion_retrieval_errors`: Should be 0 under normal operation

### Alert Thresholds
- WARN: pending_count > 100 for > 5 minutes
- CRITICAL: pending_count > 1000 or retrieval_errors > 10/hour

7. Concurrency & Race Conditions ⭐⭐⭐⭐

Strengths

  • Atomic DA height: Properly uses atomic.Uint64
  • sync.Map for pending txs: Correct choice for concurrent read/write in syncer
  • Channel-based coordination: Good use of channels in syncer workers

Issues

🟡 MEDIUM: Potential race in pendingForcedInclusionTxs (single sequencer)

Location: sequencers/single/sequencer.go:56,342

type Sequencer struct {
    // ...
    pendingForcedInclusionTxs []pendingForcedInclusionTx // Not protected by mutex
}

func (s *Sequencer) processForcedInclusionTxs(...) {
    // Reads and writes to pendingForcedInclusionTxs without synchronization
    s.pendingForcedInclusionTxs = newPendingTxs
}

Issue: If GetNextBatch is called concurrently (which shouldn't happen but isn't enforced), race condition exists.

Recommendation: Either:

  1. Document that GetNextBatch must not be called concurrently, OR
  2. Add mutex protection:
type Sequencer struct {
    mu                        sync.Mutex
    pendingForcedInclusionTxs []pendingForcedInclusionTx
}

Based on the architecture, option 1 is likely sufficient, but it should be documented.


8. Configuration & Deployment ⭐⭐⭐⭐

Strengths

  • Genesis parameter: DAEpochForcedInclusion properly placed in genesis (consensus-critical)
  • Optional feature: Graceful handling when forced inclusion not configured
  • Clear configuration examples: Both YAML examples are helpful

Issues

🟢 MINOR: No validation for epoch size = 0

Location: types/epoch.go:18-20

if daEpochSize == 0 {
    return 1 // All blocks in epoch 1
}

Issue: Epoch size of 0 is a special case but isn't validated at config parsing time. Could lead to confusion.

Recommendation: Add validation in genesis loading:

func (g Genesis) Validate() error {
    if g.DAEpochForcedInclusion == 0 {
        return errors.New("DAEpochForcedInclusion must be >= 1")
    }
    // ...
}

Summary of Recommendations

Must Fix Before Production

  1. Based sequencer timestamp (already has TODO, just needs implementation)
  2. ⚠️ Persist pending forced inclusion txs (data loss risk on restart)
  3. ⚠️ Add bounds checking to based sequencer txQueue (memory leak risk)

Should Fix Soon

  1. Improve error handling in forced inclusion retriever (don't swallow errors)
  2. Add missing edge case tests (epoch boundaries, concurrency)
  3. Document concurrency expectations for GetNextBatch
  4. Add parallel DA fetching within epochs (follow-up PR feat: async force inclusion fetching #2842 noted)

Nice to Have

  1. Add operational monitoring guidance
  2. Add context cancellation checks in DA retrieval loops
  3. Validate genesis epoch size > 0
  4. Consider future rate limiting for forced inclusion namespace

Conclusion

This is a well-engineered feature that demonstrates strong architectural thinking and good Go practices. The forced inclusion mechanism provides genuine censorship resistance while maintaining performance through clever epoch-based fetching.

The implementation is nearly production-ready with a few important fixes needed:

  • The timestamp determinism issue must be resolved
  • Pending transaction persistence should be added
  • Resource bounds should be enforced

Once these issues are addressed and the follow-up tasks completed (async fetching, smoothing refinements), this will be a robust and valuable feature for the ev-node framework.

Recommendation: Address the critical timestamp issue and add persistence for pending transactions, then merge. The remaining improvements can be handled in follow-up PRs as already planned by the team.

Great work on this complex feature! 👏


@julienrbrt julienrbrt changed the title from "[WIP] feat: forced inclusion for executor" to "feat: forced inclusion" Nov 5, 2025
Contributor

@alpe alpe left a comment


Very nice start!
Can you elaborate why you decided on a separate namespace for the force inclusion TX? The consumers have to read both ns anyway to stay up to date


event.StartDaHeight = epochHeight
event.Txs = append(event.Txs, result.Data...)
}
Contributor

We need to prepare for malicious content. let's exit the loop early when a tx size threshold is reached. This can be a multiple of common.DefaultMaxBlobSize used by the executor
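
A hedged sketch of that early exit inside the epoch processing loop; the 4x multiple and variable names are assumptions, not a settled design:

// Sketch: cap how many forced-inclusion bytes are accepted per epoch so
// malicious content can't blow up memory. The threshold is illustrative.
maxEpochBytes := 4 * int(common.DefaultMaxBlobSize)
totalBytes := 0
for _, blob := range result.Data {
    if totalBytes+len(blob) > maxEpochBytes {
        r.logger.Warn().Msg("forced inclusion size threshold reached, ignoring remaining blobs in this epoch")
        break
    }
    event.Txs = append(event.Txs, blob)
    totalBytes += len(blob)
}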

Member Author

Makes sense for the height check, yes! However, I was thinking of doing no other checks and letting the execution client deal with gibberish data (this is why I added that as a requirement in the execution interface description).

Contributor

If we want to keep raw TX data in the namespace, there is not much we can do here to validate, indeed. A size check is an easy win but more would require extending the executor interface for a checkTX.

Member Author

I agree, and this may actually be required to avoid congestion issues and losing txs.

@julienrbrt
Member Author

Can you elaborate why you decided on a separate namespace for the force inclusion TX? The consumers have to read both ns anyway to stay up to date

This was a suggestion. Personally I think it makes sense, as we are filtering what comes up in that namespace at the fetching level, directly in ev-node. What is posted in the forced inclusion namespace is handled directly by the execution client; ev-node only passes down bytes.

@julienrbrt julienrbrt marked this pull request as ready for review November 6, 2025 20:46
@julienrbrt julienrbrt marked this pull request as draft November 6, 2025 20:47
@github-actions
Contributor

github-actions bot commented Nov 10, 2025

PR Preview Action v1.6.3

🚀 View preview at
https://evstack.github.io/docs-preview/pr-2797/

Built to branch main at 2025-12-02 16:45 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

@codecov

codecov bot commented Nov 10, 2025

Codecov Report

❌ Patch coverage is 79.88506% with 105 lines in your changes missing coverage. Please review.
✅ Project coverage is 65.43%. Comparing base (f1e677d) to head (9b478fa).
⚠️ Report is 1 commit behind head on main.

Files with missing lines | Patch % | Lines
sequencers/single/sequencer.go | 76.69% | 24 Missing and 7 partials ⚠️
block/internal/da/forced_inclusion_retriever.go | 78.04% | 16 Missing and 2 partials ⚠️
block/internal/executing/executor.go | 43.33% | 11 Missing and 6 partials ⚠️
sequencers/based/sequencer.go | 82.27% | 12 Missing and 2 partials ⚠️
block/internal/syncing/syncer.go | 90.72% | 8 Missing and 1 partial ⚠️
block/components.go | 0.00% | 6 Missing and 1 partial ⚠️
core/sequencer/dummy.go | 0.00% | 3 Missing ⚠️
block/public.go | 77.77% | 2 Missing ⚠️
pkg/config/config.go | 81.81% | 1 Missing and 1 partial ⚠️
pkg/genesis/genesis.go | 75.00% | 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2797      +/-   ##
==========================================
+ Coverage   64.53%   65.43%   +0.90%     
==========================================
  Files          81       85       +4     
  Lines        7370     7838     +468     
==========================================
+ Hits         4756     5129     +373     
- Misses       2072     2151      +79     
- Partials      542      558      +16     
Flag | Coverage Δ
combined | 65.43% <79.88%> (+0.90%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


@julienrbrt julienrbrt marked this pull request as ready for review November 10, 2025 16:14
@github-actions
Contributor

github-actions bot commented Nov 10, 2025

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed | Format: ⏩ skipped | Lint: ✅ passed | Breaking: ✅ passed | Updated (UTC): Dec 3, 2025, 2:48 PM

@julienrbrt
Member Author

List of improvements to do in follow-ups:

  1. Improve DA fetching by parallelizing epoch fetching
  2. Simplify DA requests after [EPIC] Remove DA Interface #2796: fetch the latest DA height instead of checking epoch boundaries
  3. Solve the edge case where the proposer misses blocks and comes back online with forced-inclusion blocks already published

@julienrbrt julienrbrt marked this pull request as draft November 10, 2025 16:19
@julienrbrt
Member Author

julienrbrt commented Nov 11, 2025

We discussed the above in the standup (#2797 (comment)), and a few ideas came up.

1-2. When making the call async, we need to make sure the executor and full node stay in sync on the epoch. This can be done easily by keeping an epoch a few blocks behind the actual DA height (see the sketch after this list).

  • We need to make sure all heights of that epoch are available when we fetch the epoch (there is already code for this)
  • We need to scale that block window based on the average fetching time (the larger the DA epoch, the larger the window)
  3. We can re-use some code from [WIP] HA failover #2814 to automate node restarting (syncing -> based sequencer)
    • When the sequencer comes back online and missed an epoch, it needs to sync up until the head of the da layer
    • Based sequencers must check the forced inclusion transaction namespace for a synced checkpoint from the DA layer, and restart as a sync node if one is found (@julienrbrt: I picked this solution because otherwise it would need to fetch 2 namespaces instead of 1; the alternative is to have the sequencer fetch the header namespace only at the end of the epoch).
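
A rough sketch of the lagged-epoch idea from point 1-2; the window rule and names are assumptions, not the final design:

// Sketch: only fetch an epoch once the DA head is safely past its end, so
// every height in the epoch is guaranteed to be available to all nodes.
func epochReadyToFetch(daHead, epochEnd, lagBlocks uint64) bool {
    return daHead >= epochEnd+lagBlocks
}

// lagBlocks could scale with the observed average epoch fetch time and the DA
// block time, e.g. roughly avgEpochFetchTime/daBlockTime plus a safety margin.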

@julienrbrt julienrbrt marked this pull request as ready for review November 11, 2025 16:29
@julienrbrt julienrbrt marked this pull request as draft November 11, 2025 16:58
Contributor

@alpe alpe left a comment


Thanks for answering all my questions and comments.
There is still a TODO in the code to store unprocessed direct TXs when the max block size is reached.



julienrbrt added a commit that referenced this pull request Nov 13, 2025
we decided to remove the sequencer go.mod, as ev-node can provide
directly the sequencer implementation (sequencers/single was already
depending on ev-node anyway)

this means no go.mod needs to be added for the new based sequencers in
#2797
@julienrbrt julienrbrt marked this pull request as ready for review November 13, 2025 10:58
@julienrbrt
Member Author

Once this PR is merged, we should directly after:

In the meantime, I have disabled the feature so it can be merged (0d790ef)

@julienrbrt
Member Author

FYI the upgrade test will fail until tastora is updated.


return &coresequencer.GetNextBatchResponse{
    Batch:     batch,
    Timestamp: time.Now(),
Contributor

This is not deterministic for all nodes

Member Author

this isn't really an issue, as every node is the sequencer.

Contributor

This timestamp is used for the headerTime of the next block. This will lead to a different hash for the block. The other thing is that app logic on the chain may use this value in their decision tree or store it. State could diverge across the nodes, which makes it hard to recover later.

Member Author

I see. Then we need to use the time of the DA block, as the block production time of a based sequencer can never be in sync across all nodes.
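
A sketch of what that could look like, assuming the forced inclusion event carried a (hypothetical) DA block timestamp:

// Hypothetical: forcedTxsEvent.Timestamp would be the DA block time, so every
// node derives the same headerTime. The Timestamp field is an assumption.
return &coresequencer.GetNextBatchResponse{
    Batch:     batch,
    Timestamp: forcedTxsEvent.Timestamp, // DA block time instead of time.Now()
}, nil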


@julienrbrt
Member Author

julienrbrt commented Dec 1, 2025

Some of the changes were going to be tackled as follow-ups (the congestion issue, async fetching, commands) as it was getting hard to review this. This is why the feature cannot be enabled yet. There's still code missing in the execution client as well to get it all working.

I'll check the other comments.

@julienrbrt
Member Author

To recap everything that needs to happen in follow-ups:

Most of them are small and contained.

Contributor

@alpe alpe left a comment


Thanks for your comments and the follow-up task list.
Let's bring this into main when CI is happy again.

@julienrbrt julienrbrt requested review from alpe and tac0turtle December 3, 2025 13:06
@julienrbrt julienrbrt added this pull request to the merge queue Dec 3, 2025
Merged via the queue into main with commit 5ee785f Dec 3, 2025
28 of 31 checks passed
@julienrbrt julienrbrt deleted the julien/fi branch December 3, 2025 17:19
@julienrbrt julienrbrt mentioned this pull request Dec 3, 2025
8 tasks