Is this performance expected of Elysia? #1132
I have 3 replica pods, each with 1 vCPU and 1 GB of memory, hosting an Elysia server. The server performs a trivial task, as shown below:

```ts
import swagger from '@elysiajs/swagger'
import { Elysia, t } from 'elysia'

new Elysia()
    .use(swagger())
    .get(
        '/health',
        ({ request, query }) => {
            request.headers.delete('host')
            return {
                endpoint: query.endpoint,
                proxy: query.proxy,
                disable_proxy: query.disable_proxy,
            }
        },
        {
            query: t.Object({
                endpoint: t.String({
                    error: 'The `endpoint` query parameter is missing!',
                }),
                proxy: t.Optional(
                    t.String({
                        default: Bun.env['HTTP_PROXY'] || Bun.env['HTTPS_PROXY'],
                    }),
                ),
                disable_proxy: t.Optional(t.Boolean()),
            }),
        },
    )
```

Benchmarking with 8 threads and 400 connections gives:

```
Running 8 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    91.35ms   94.45ms    1.40s   91.11%
    Req/Sec   632.72    136.92     5.12k    85.27%
  Latency Distribution
     50%   95.00ms
     75%  102.10ms
     90%  141.46ms
     99%  466.25ms
  151580 requests in 30.07s, 41.69MB read
  Non-2xx or 3xx responses: 16
Requests/sec:   5040.83
Transfer/sec:      1.39MB
```

~5,000 RPS with a P99 of 466 ms for such a trivial task seems poor. For reference, my Granian server serves its entire Swagger documentation (which is arguably more work) with a P99 of 100 ms. Do let me know if this is what you are seeing on your side as well.
Replies: 1 comment · 1 reply
What are you benchmarking here? The underlying Bun server versus a Rust HTTP server? It's obvious that Rust will likely be faster. For trivial tasks like this you're barely triggering any JavaScript or Python code, so the result won't give you an indication of what real-world usage looks like. Then again, what is the alternative: making assumptions instead of actually benchmarking and comparing?

At the end of the day, the P90 looks fine. It's hard to tell without more information (e.g. metrics). It could be a bug in Bun, in Elysia, or something else.
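One way to narrow this down would be to benchmark the same response logic on a bare server and compare it with the Elysia route, isolating the framework's routing and validation overhead from Bun's HTTP stack. A minimal sketch (the `buildBody` helper and the port are hypothetical, not from this thread):

```typescript
// Pure handler logic mirroring the Elysia route above, so the cost of
// Elysia's routing/validation can be compared against a bare server.
function buildBody(url: URL) {
    return {
        endpoint: url.searchParams.get('endpoint'),
        proxy: url.searchParams.get('proxy'),
        disable_proxy: url.searchParams.get('disable_proxy'),
    }
}

// Hypothetical baseline: Bun's built-in server with no framework on top.
// Only starts when run under Bun; the port is a placeholder.
const bun = (globalThis as any).Bun
if (bun) {
    bun.serve({
        port: 3000,
        fetch: (req: { url: string }) =>
            new (globalThis as any).Response(
                JSON.stringify(buildBody(new URL(req.url))),
                { headers: { 'content-type': 'application/json' } },
            ),
    })
}
```

Running the same load test against both servers with identical thread and connection counts would show how much of the reported latency comes from the framework versus the underlying HTTP server.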