Llamafiler HTTP server stability/reliability fixes #767
Previously, when all worker threads were busy, the code would forcibly cancel the oldest active connection to make room for a new one. This approach:

* Prematurely terminates in-flight requests, leading to broken or truncated responses.
* Forces cleanup of file descriptors mid-stream, causing spurious "Illegal seek"/EBADF errors.
* Circumvents the TCP backlog queuing provided by the OS, instead dropping live clients and degrading user experience.

By deleting this block, we let the kernel's listen backlog naturally queue incoming connections until a worker becomes available. As a result:

* Active requests are no longer killed arbitrarily under load.
* File descriptors aren't closed unexpectedly, eliminating related "static asset pread" failures.
* The server benefits from standard TCP connection handling without manual interference.

This change improves reliability under high concurrency and avoids unintended side effects from thread cancellation.
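The kernel-side queuing this relies on is set up at `listen()` time. A minimal sketch (not llamafiler's actual code; `make_listener` is an illustrative name) of a listener that lets the OS backlog absorb bursts while workers are busy:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

// Creates a loopback TCP listener. With a backlog of SOMAXCONN, the
// kernel queues completed connections until a worker calls accept(),
// so no in-flight connection ever needs to be cancelled to make room.
static int make_listener(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, SOMAXCONN) < 0) {  // kernel queues pending clients here
        close(fd);
        return -1;
    }
    // Queued connections simply wait in the backlog; workers accept()
    // them when they free up.
    return fd;
}
```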
The current send_binary() implementation treats any write() that returns less than the requested byte count as a failure, immediately setting close_connection_ = true and returning false.

This is incorrect behavior. Per POSIX, write() may legitimately return fewer bytes than requested when:

- The socket send buffer is nearly full
- Network congestion causes backpressure
- Large write sizes exceed kernel buffers
- The system is under memory pressure

These partial writes are normal, not error conditions. The current code incorrectly drops active connections during normal operation, especially under load when partial writes become more common.

This commit replaces the single write() call with a proper write loop that:

- Accumulates partial writes until all data is sent
- Retries on EAGAIN/EWOULDBLOCK without closing the connection
- Only treats actual errors (not partial writes) as failures
- Maintains the existing error logging behavior

This fix prevents spurious connection drops during large responses or when the network is congested, significantly improving reliability under production load.

Fixes: Connection drops during static file serving
Fixes: Increased failure rate under high load
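The write loop described above can be sketched as follows. This is an illustrative helper (`write_all` is not the PR's actual function name), assuming a blocking or mostly-writable socket; partial writes and transient errno values are retried rather than treated as fatal:

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

// Returns 0 once all `len` bytes are written, -1 on a genuine error.
// Partial writes accumulate; EINTR/EAGAIN/EWOULDBLOCK are retried
// instead of closing the connection.
static int write_all(int fd, const char *buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, buf + sent, len - sent);
        if (n < 0) {
            if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK)
                continue;      // transient condition: try again
            return -1;         // real error: caller may now close
        }
        sent += (size_t)n;     // short write: keep going from the offset
    }
    return 0;
}
```

Note that retrying EAGAIN in a tight loop like this burns CPU on a nonblocking socket; the later commit (see the poll()-based change below in this PR's sequence) addresses that.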
The 512-byte buffer size results in excessive system calls when serving files. For a 1MB file, this requires 2048 read/write operations. Using 16KB reduces system call overhead by 32x and better matches typical OS page sizes and network buffer defaults. This should improve throughput for static file serving while maintaining reasonable stack usage.
- Fix URL prefix normalization to handle double slashes (fixes mozilla-ai#787)
  - Consolidates consecutive slashes (// -> /)
  - Ensures prefix starts with a single slash
  - Removes trailing slash (except for root)
  - Matches old server behavior exactly
- Fix .args file loading order (fixes mozilla-ai#783)
  - Load .args before determining program type
  - Allows --server --v2 flags in .args to work correctly
- Fix connection stability issues (addresses mozilla-ai#767)
  - Remove aggressive client cancellation when workers are busy
  - Let TCP backlog handle connection queuing naturally
  - Fix partial write handling with simple retry logic
  - Increase file transfer buffer from 512 B to 16 KB

These minimal changes make Server v2 production-ready while maintaining backward compatibility. All fixes follow existing patterns from the old server implementation.
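The prefix-normalization rules listed above (collapse slash runs, force a leading slash, strip a trailing slash except for the root) can be sketched as an in-place C helper; `normalize_prefix` is an illustrative name, not the PR's actual function:

```c
#include <stddef.h>
#include <string.h>

// Normalizes a URL prefix in place. `s` must have room for one extra
// byte, in case a leading slash has to be inserted.
static void normalize_prefix(char *s) {
    // Ensure a single leading slash.
    if (s[0] != '/') {
        memmove(s + 1, s, strlen(s) + 1);
        s[0] = '/';
    }
    // Collapse runs of consecutive slashes (// -> /).
    size_t w = 0;
    for (size_t r = 0; s[r]; r++) {
        if (s[r] == '/' && w > 0 && s[w - 1] == '/')
            continue;
        s[w++] = s[r];
    }
    s[w] = '\0';
    // Strip a trailing slash, except for the bare root "/".
    if (w > 1 && s[w - 1] == '/')
        s[w - 1] = '\0';
}
```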
- Add FLAG_http_write_timeout (default 60 s) for configurable write timeouts
- Replace busy-loop with poll() when socket writes return EAGAIN/EWOULDBLOCK
- Add EINTR handling for signal-interrupted system calls
- Fix partial write handling in safe_writev() across multiple iovecs
- Document new flag in man page
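A minimal sketch of the poll()-based approach, assuming a nonblocking socket. The helper name is illustrative, and the timeout parameter stands in for the FLAG_http_write_timeout value described above (the 60 s default comes from the change description, not from this sketch):

```c
#include <errno.h>
#include <poll.h>
#include <stddef.h>
#include <unistd.h>

// Writes all of `buf`, sleeping in poll() rather than busy-looping when
// the socket is not writable. Returns 0 on success, -1 on error or if
// the write timeout expires.
static int write_with_timeout(int fd, const char *buf, size_t len,
                              int timeout_ms) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, buf + sent, len - sent);
        if (n > 0) {
            sent += (size_t)n;              // accumulate partial writes
            continue;
        }
        if (n < 0 && errno == EINTR)
            continue;                       // interrupted by a signal: retry
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = {.fd = fd, .events = POLLOUT};
            int r = poll(&pfd, 1, timeout_ms);  // block until writable
            if (r == 0)
                return -1;                  // write timeout expired
            if (r < 0 && errno != EINTR)
                return -1;                  // poll() itself failed
            continue;
        }
        return -1;                          // genuine write error
    }
    return 0;
}
```

The same pattern extends to safe_writev() by advancing the iovec array past fully written entries after each short writev().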
Commit 521f7b7 addresses remaining issues with socket write handling that weren't fully solved by the previous partial write fix (item 2); the problems addressed are the ones listed in that commit's message above.