22 changes: 22 additions & 0 deletions rust/docker-compose.yml
@@ -0,0 +1,22 @@
version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./impls/src/postgres/sql/v0_create_vss_db.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    networks:
      - app-network

volumes:
  postgres-data:

networks:
  app-network:
    driver: bridge
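
Aside (not part of the diff): a minimal sketch of talking to the Postgres container this compose file brings up, assuming the tokio and tokio-postgres crates; the actual VSS PostgreSQL backend may manage connections differently. Host, port, and credentials mirror the service definition above.

use tokio_postgres::NoTls;

#[tokio::main]
async fn main() -> Result<(), tokio_postgres::Error> {
	// Credentials and port match the POSTGRES_* settings and the "5432:5432" mapping above.
	let (client, connection) = tokio_postgres::connect(
		"host=localhost port=5432 user=postgres password=postgres dbname=postgres",
		NoTls,
	)
	.await?;
	// The Connection drives the wire protocol and must be polled; spawn it.
	tokio::spawn(async move {
		if let Err(e) = connection.await {
			eprintln!("connection error: {e}");
		}
	});
	// Smoke test; the init script mounted above should also have created the VSS schema.
	let row = client.query_one("SELECT 1", &[]).await?;
	assert_eq!(row.get::<_, i32>(0), 1);
	Ok(())
}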
17 changes: 14 additions & 3 deletions rust/server/src/vss_service.rs
@@ -1,4 +1,4 @@
-use http_body_util::{BodyExt, Full};
+use http_body_util::{BodyExt, Full, Limited};
use hyper::body::{Bytes, Incoming};
use hyper::service::Service;
use hyper::{Request, Response, StatusCode};
@@ -18,6 +18,8 @@ use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;

+const MAXIMUM_REQUEST_BODY_SIZE: u16 = 65_535;
Contributor: This seems very conservative, given that monitors could get quite large. Given that the VSS service is actually a storage service, it also might make sense to make this configurable (in contrast to lightningdevkit/ldk-server#80, but even there we set the limit to 10MB).

Contributor: As mentioned elsewhere: I guess a static upper bound is a good first step, but if we're really concerned about DoS we might need some dynamic rate limiting on a per-IP basis. Although then the question becomes how much of that should be considered the concern of the VSS service itself, and how much we'd just expect users to slap a load balancer/Cloudflare in front of the service to handle that for them.
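
Aside (illustration only, not proposed in the PR): the per-IP rate limiting mentioned above could be sketched as a naive in-memory token bucket keyed by peer address. Every name here is hypothetical; as the comment says, a reverse proxy or load balancer is the more likely home for this concern.

use std::collections::HashMap;
use std::net::IpAddr;
use std::time::Instant;

struct TokenBucket {
	tokens: f64,
	last_refill: Instant,
}

/// Hypothetical per-IP limiter: each address gets `capacity` request
/// tokens that refill at `refill_per_sec`; a request spends one token.
pub struct PerIpLimiter {
	buckets: HashMap<IpAddr, TokenBucket>,
	capacity: f64,
	refill_per_sec: f64,
}

impl PerIpLimiter {
	pub fn new(capacity: f64, refill_per_sec: f64) -> Self {
		Self { buckets: HashMap::new(), capacity, refill_per_sec }
	}

	/// Returns true if `ip` may make another request right now.
	pub fn allow(&mut self, ip: IpAddr) -> bool {
		let now = Instant::now();
		let (capacity, rate) = (self.capacity, self.refill_per_sec);
		let bucket = self
			.buckets
			.entry(ip)
			.or_insert(TokenBucket { tokens: capacity, last_refill: now });
		// Refill proportionally to the time since the last request, capped at capacity.
		let elapsed = now.duration_since(bucket.last_refill).as_secs_f64();
		bucket.tokens = (bucket.tokens + elapsed * rate).min(capacity);
		bucket.last_refill = now;
		if bucket.tokens >= 1.0 {
			bucket.tokens -= 1.0;
			true
		} else {
			false
		}
	}
}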

Contributor (Author): I've made the configuration changes suggested here, capping the maximum size at 20 MB.
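
The follow-up commit itself isn't shown in this view; purely as a sketch of the shape such a change could take (all names here are made up, not the actual code), a configurable cap might look like:

/// Hypothetical default matching the 20 MB figure mentioned above.
const DEFAULT_MAX_REQUEST_BODY_SIZE: usize = 20 * 1024 * 1024;

/// Hypothetical config handed to VssService at construction time.
#[derive(Clone)]
pub struct VssServiceConfig {
	/// Upper bound on accepted request bodies, in bytes.
	pub max_body_size: usize,
}

impl Default for VssServiceConfig {
	fn default() -> Self {
		Self { max_body_size: DEFAULT_MAX_REQUEST_BODY_SIZE }
	}
}

The handler would then pass config.max_body_size to Limited::new in place of the constant.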


#[derive(Clone)]
pub struct VssService {
	store: Arc<dyn KvStore>,
@@ -110,8 +112,17 @@ async fn handle_request<
		Ok(auth_response) => auth_response.user_token,
		Err(e) => return Ok(build_error_response(e)),
	};
-	// TODO: we should bound the amount of data we read to avoid allocating too much memory.
-	let bytes = body.collect().await?.to_bytes();

+	let limited_body = Limited::new(body, MAXIMUM_REQUEST_BODY_SIZE.into());
+	let bytes = match limited_body.collect().await {
+		Ok(body) => body.to_bytes(),
+		Err(_) => {
+			return Ok(Response::builder()
+				.status(StatusCode::PAYLOAD_TOO_LARGE)
+				.body(Full::new(Bytes::from("Request body too large")))
+				.unwrap());
+		},
+	};
	match T::decode(bytes) {
		Ok(request) => match handler(store.clone(), user_token, request).await {
			Ok(response) => Ok(Response::builder()
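
Aside: a self-contained sketch (assuming tokio, hyper, and http-body-util as dependencies) of the behavior the new code relies on. http_body_util::Limited wraps any body and makes collect() fail once more than the configured number of bytes arrive, which the handler above maps to 413 PAYLOAD_TOO_LARGE:

use http_body_util::{BodyExt, Full, Limited};
use hyper::body::Bytes;

#[tokio::main]
async fn main() {
	// A 100_000-byte body against the 65_535-byte cap from the diff.
	let body = Full::new(Bytes::from(vec![0u8; 100_000]));
	let limited = Limited::new(body, 65_535);
	// collect() buffers the body but errors out once the limit is exceeded.
	assert!(limited.collect().await.is_err());
}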