Slow reading of small chunks #2135
What versions are you using (specifically of hyper and tokio)?

hyper 0.13.2 and tokio 0.2.11
Hm, interesting! I assume the server is just sending a never-ending chunked stream? What size are the chunks?
Yes. 1024 bytes. https://github.com/alex/http-client-bench/blob/master/server.go |
Ah ok, so my guess is that the slowdown is due to the size of the chunks. hyper is yielding each chunk of 1kb, even though it has read multiple from the socket. I suspect if you were to increase the chunk size of the server to something bigger like 8 or 16kb (or even bigger), the performance should even out.
You're right: hyper does perform a lot better if the server sends larger chunks. The other clients also improve, but hyper does catch up to them.
[benchmark results at 8kb, 16kb, 32kb, and 64kb chunk sizes]
Definitely worth profiling to find where the hot spots are, but I ran into something similar with tokio-postgres and found that it was much more efficient to send "macro blocks" of multiple messages through the channel from the connection to request future and have it split apart in the response stream itself: sfackler/rust-postgres#452 |
There may be two steps here:
Go will try to read multiple chunks at a time into a single user buffer, whereas Netty yields each chunk as well. So now I wonder if (1) is significant.
Also experiencing this issue. I'm getting double the times of my Python script for a simple GET request.
While dealing with a performance issue in requests (yes, the Python library) I stumbled across psf/requests#2371. I wanted to quickly evaluate whether switching to Rust would make sense for my problem, so I created a small benchmark in Rust, matching the ones in https://github.com/alex/http-client-bench:
To my surprise this is a lot slower than the Python clients on my machine:
I've built it with `--release`; timings vary between runs but are always in the same ballpark. I thought at first that maybe it's just writing to stdout that's slow, but even if I modify the benchmark to remove writing to stdout, it gets faster yet remains slower than the competition.

I've tried searching the documentation for any configuration changes I might be able to make, but nothing looked relevant as far as I could see. So... what's going on?