
Conversation

Contributor

@Aditya1404Sal Aditya1404Sal commented Jan 5, 2026

Feature or Problem

This PR adds component-level lifecycle management to the wasmcloud runtime, enabling granular control over individual components within a workload without requiring full workload restarts. Previously, any component update or state change required restarting the entire workload, causing unnecessary downtime for unaffected components.

Key capabilities added:

  • Component-level start/stop operations: Start or stop specific components by name or ID within a running workload
  • Component state tracking: Components now track their lifecycle state (Starting, Running, Stopped, Reconciling, Error, Stopping); a state-machine sketch follows this list
  • Component naming: Optional user-defined names for stable component identification across updates (Kubernetes-style naming)
  • Intelligent component updates: workload_update automatically detects which components changed and updates only those
  • Dependency re-linking: When a component is updated, all dependent components are automatically re-linked in topological order to use the new code
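
For reference, here is a minimal sketch of the lifecycle state machine implied by the list above. The variant names come from this PR's description; the `ComponentState` enum name, the transition guard, and the specific allowed transitions are illustrative assumptions, not the runtime's actual types.

```rust
// Illustrative sketch only: variant names are from this PR's description,
// but the type names and allowed transitions are assumptions.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ComponentState {
    Starting,
    Running,
    Stopped,
    Reconciling,
    Error,
    Stopping,
}

impl ComponentState {
    /// Hypothetical transition guard a runtime might enforce.
    fn can_transition_to(self, next: ComponentState) -> bool {
        use ComponentState::*;
        matches!(
            (self, next),
            (Starting, Running)
                | (Starting, Error)
                | (Running, Reconciling) // in-place update begins
                | (Reconciling, Running) // update finished, re-linked
                | (Reconciling, Error)
                | (Running, Stopping)
                | (Stopping, Stopped)
                | (Stopped, Starting)    // quick restart from memory
        )
    }
}

fn main() {
    // A stopped component can be restarted without a full workload restart.
    assert!(ComponentState::Stopped.can_transition_to(ComponentState::Starting));
    assert!(!ComponentState::Stopped.can_transition_to(ComponentState::Running));
}
```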

Related Issues

Release Information

next

Consumer Impact

Breaking Changes: None - this is a backwards-compatible addition.

API Additions (illustrative Rust shapes are sketched after this list):

  • WorkloadStartRequest now accepts optional component_ids filter to start specific components
  • WorkloadStopRequest now accepts optional component_ids filter to stop specific components
  • WorkloadUpdateRequest added - takes a Workload spec and updates only changed components
  • WorkloadStatusResponse now includes ComponentInfo list with per-component state (id, name, state, message)
  • Component type now has optional name field for stable identification
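
To make the additions concrete, here is a rough sketch of the shapes above as plain Rust types. Only the fields named in this list come from the PR; everything else (field names like `workload_id`, the placeholder `Workload` type, representing state as a `String`) is a hypothetical stand-in for illustration.

```rust
#![allow(dead_code)]

/// Stand-in for the existing workload spec type.
struct Workload;

struct WorkloadStartRequest {
    workload_id: String,                // assumed pre-existing field
    component_ids: Option<Vec<String>>, // new: only start these components
}

struct WorkloadStopRequest {
    workload_id: String,                // assumed pre-existing field
    component_ids: Option<Vec<String>>, // new: only stop these components
}

/// New request: diffed against the running workload; only changed
/// components are updated.
struct WorkloadUpdateRequest {
    workload: Workload,
}

/// Per-component status entry now included in WorkloadStatusResponse.
struct ComponentInfo {
    id: String,
    name: Option<String>, // stable, user-defined (Kubernetes-style)
    state: String,        // Starting | Running | Stopped | Reconciling | ...
    message: String,
}

struct WorkloadStatusResponse {
    components: Vec<ComponentInfo>,
}
```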

Runtime Behavior Changes:

  • Components stopped via component_ids remain in memory with Stopped state (can be restarted quickly)
  • Component updates cascade: if component A depends on B, which depends on C, updating C automatically re-links both B and A (the ordering is sketched after this list)
  • HTTP handler caches are automatically refreshed when components are re-linked
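
The cascading re-link can be pictured as a topological sort over the "depends on" graph. Below is a minimal sketch, assuming a `dependents` map from each component to its direct dependents; the function name and data structures are illustrative, not the runtime's actual code.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Hypothetical sketch: `dependents[x]` lists the components that depend
/// directly on x. Returns the transitive dependents of `updated` in
/// topological order, so each component is re-linked only after every
/// affected component it depends on.
fn relink_order(
    dependents: &HashMap<String, Vec<String>>,
    updated: &str,
) -> Vec<String> {
    // Pass 1: BFS to collect everything affected by the update.
    let mut affected: HashSet<String> = HashSet::new();
    let mut queue = VecDeque::from([updated.to_string()]);
    while let Some(c) = queue.pop_front() {
        for d in dependents.get(&c).into_iter().flatten() {
            if affected.insert(d.clone()) {
                queue.push_back(d.clone());
            }
        }
    }

    // Pass 2: Kahn's algorithm restricted to the affected subgraph.
    let mut indegree: HashMap<String, usize> =
        affected.iter().map(|c| (c.clone(), 0)).collect();
    for c in &affected {
        for d in dependents.get(c).into_iter().flatten() {
            if let Some(n) = indegree.get_mut(d) {
                *n += 1;
            }
        }
    }
    let mut ready: VecDeque<String> = indegree
        .iter()
        .filter(|(_, n)| **n == 0)
        .map(|(c, _)| c.clone())
        .collect();
    let mut order = Vec::new();
    while let Some(c) = ready.pop_front() {
        for d in dependents.get(&c).into_iter().flatten() {
            if let Some(n) = indegree.get_mut(d) {
                *n -= 1;
                if *n == 0 {
                    ready.push_back(d.clone());
                }
            }
        }
        order.push(c);
    }
    order
}

fn main() {
    // A depends on B, B depends on C (the example from the list above).
    let dependents = HashMap::from([
        ("C".to_string(), vec!["B".to_string()]),
        ("B".to_string(), vec!["A".to_string()]),
    ]);
    assert_eq!(relink_order(&dependents, "C"), vec!["B", "A"]);
}
```

With the example graph from the list above, updating C yields the re-link order [B, A]: B is re-linked against the new C first, then A against the new B.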

Testing

Unit Test(s)

None

Acceptance or Integration

Will commit soon

Manual Verification

Tested locally in an integration test setup.

…ndler cache to reload component chain

Signed-off-by: Aditya <[email protected]>
@Aditya1404Sal Aditya1404Sal marked this pull request as ready for review January 6, 2026 19:25
@Aditya1404Sal Aditya1404Sal requested a review from a team as a code owner January 6, 2026 19:25
@Aditya1404Sal Aditya1404Sal marked this pull request as draft January 6, 2026 19:25
@lxfontes lxfontes moved this to Discuss in wash v2 release Jan 7, 2026
Member

lxfontes commented Jan 7, 2026

I don't think we'd want this behaviour at the protobuf / clustered level; maybe at the wash engine level only ⚖️ (for embedding / wash dev).

Following Kubernetes / distributed-systems principles, "Restarts" are "Redeploys" by design for a few key reasons:

  • Immutability: Like Kubernetes Pods, Workloads are immutable
  • Orchestration: The host is never alone (besides embedding), there is always a Cluster Orchestrator taking care of moving workloads around
  • Abstraction: It's ok to schedule Workloads, but users are better served by WorkloadReplicaSet or WorkloadDeployment, which take fault domains into consideration

This "update-in-place" behavior is present in wasmcloud v1 + wadm and caused many outages / deploy issues and latency due to extra wrpc/nats layer.

@Mees-Molenaar commented

I agree with @lxfontes here.

However, what if you have a workload consisting of components A and B? Component A changes frequently, so you have to redeploy the workload often, while component B rarely changes, so redeploying it each time is pure overhead.

Would it then be wise to separate the components into two workloads and let component A and B communicate through wasmcloud/messaging?

@Aditya1404Sal Aditya1404Sal force-pushed the component-reload branch 2 times, most recently from 646481e to 5604816 on January 9, 2026 19:22
…l component updates

Introduce synchronization primitives and test coverage so component updates wait for in-flight requests to drain and queued requests are held until the component returns to Running.

Signed-off-by: Aditya <[email protected]>
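
For context, one common way to implement this drain-and-hold behavior is a read/write lock used as a gate. The commit above does not show its primitives, so this is a hedged sketch assuming tokio, with `DrainGate` as a hypothetical name; tokio's `RwLock` queues lock requests fairly, so a waiting writer also holds back newly arriving readers, which gives the "queued requests are held" property.

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

/// Hypothetical sketch (not the PR's actual code): requests hold a read
/// lock for their lifetime; an update takes the write lock, which waits
/// for in-flight readers to drain and holds new requests until dropped.
#[derive(Clone, Default)]
struct DrainGate(Arc<RwLock<()>>);

impl DrainGate {
    /// Call at the start of a request; keep the guard until the response
    /// is written. Readers arriving after a waiting writer queue behind it.
    async fn enter(&self) -> tokio::sync::RwLockReadGuard<'_, ()> {
        self.0.read().await
    }

    /// Call around a component update: resolves once in-flight requests
    /// have drained; queued requests resume when the guard is dropped
    /// (i.e. once the component is back to Running).
    async fn begin_update(&self) -> tokio::sync::RwLockWriteGuard<'_, ()> {
        self.0.write().await
    }
}

#[tokio::main]
async fn main() {
    let gate = DrainGate::default();
    {
        let _req = gate.enter().await;       // simulate an in-flight request
    }                                        // request drains here
    let _update = gate.begin_update().await; // update proceeds
}
```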