Affected component
channel
Severity
S3 - minor issue
Current behavior
When using a slow local LLM (e.g., LocalAI) that does not respond within ~5 seconds, the model request is canceled.
Inspecting the code for the handle_nextcloud_talk_webhook function in crates/zeroclaw-gateway/src/lib.rs shows that it makes a synchronous call to the LLM implementation.
Expected behavior
The webhook should return immediately, and the message should be processed asynchronously in the background.
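A minimal sketch of that pattern, using only the standard library (the handler and function names here are hypothetical, not ZeroClaw's actual API): the webhook handler acknowledges right away and moves the slow model call onto a background thread, so the HTTP response is never held open for the LLM.

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the slow model call; in the real bug this can take
// well over 5 seconds with a local LLM such as LocalAI.
fn process_with_llm(message: String) -> String {
    format!("reply to: {message}")
}

// Hypothetical webhook handler: returns an HTTP-style status
// immediately, while the LLM work continues in the background.
fn handle_webhook(message: String, done: mpsc::Sender<String>) -> u16 {
    thread::spawn(move || {
        let reply = process_with_llm(message);
        // In the real gateway this reply would be posted back to
        // Nextcloud Talk; here it just goes to a channel.
        let _ = done.send(reply);
    });
    200 // acknowledgement sent before the model responds
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let status = handle_webhook("hello".to_string(), tx);
    assert_eq!(status, 200); // handler returned without waiting
    let reply = rx.recv().unwrap();
    assert_eq!(reply, "reply to: hello");
    println!("status={status}, reply={reply}");
}
```

In an async gateway the same shape would typically use `tokio::spawn` instead of an OS thread; the key point is that the model call is detached from the webhook response.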
Steps to reproduce
1. Run a slow local LLM (one that takes longer than ~5 seconds to respond)
2. Send a message via Nextcloud Talk
3. Observe the LLM logs: `context canceled`
4. Observe the ZeroClaw logs: no record of the cancellation
Impact
ZeroClaw aborts any Nextcloud Talk interaction that takes longer than approximately 5 seconds.
Logs / stack traces
ZeroClaw version
zeroclaw 0.7.3 / commit 0cd9d3b
Rust version
rustc 1.95.0 (59807616e 2026-04-14)
Operating system
Debian GNU/Linux 13 (trixie)
Regression?
No, first-time setup
Pre-flight checks