
Commit f8bc753

S3+MySQL load test numbers (#494)
* s3mysql load tests
* add date precision
* address comments, update URL
1 parent 63793ab · commit f8bc753


docs/performance.md

Lines changed: 47 additions & 19 deletions
@@ -7,16 +7,18 @@ deterministic synthetic certificates for a limited amount of time. QPS was
measured using the average values collected over the test period.

> [!NOTE]
-> These are not definitive numbers, and that more tests are to come with an
-> improved codebase.
+> These are not definitive numbers, and performance might evolve as we improve TesseraCT.
+> These load tests should be considered a snapshot of how TesseraCT performed
+> at a point in time. Do not hesitate to run such tests on your own infrastructure.

## Index

* [GCP](#gcp)
* [AWS](#aws)
* [POSIX](#posix)
-  + [NVMe SSD](#nvme)
-  + [SAS HDD](#sas-hdd)
+  * [NVMe SSD](#nvme)
+  * [SAS HDD](#sas-hdd)
+* [S3+MySQL](#s3--mysql)

## Backends

@@ -42,7 +44,7 @@ The table below shows the measured performance over 12 hours in each instance ty

##### Free Tier e2-micro VM Instance + Cloud Spanner 100 PUs

-- e2-micro (2 vCPUs, 1 GB Memory)
+* e2-micro (2 vCPUs, 1 GB Memory)

The write QPS is around 60. The bottleneck comes from the VM CPU usage which is
always above 90%. The Cloud Spanner CPU utilization is around 10%.
@@ -80,7 +82,7 @@ MiB Swap: 0.0 total, 0.0 free, 0.0 used. 68.8 avail Mem

##### e2-medium VM Instance + Cloud Spanner 100 PUs

-- e2-medium (2 vCPUs, 4 GB Memory)
+* e2-medium (2 vCPUs, 4 GB Memory)

The write QPS is around 250. The bottleneck comes from the VM CPU utilization
which is always around 100%. The Cloud Spanner CPU utilization is around 20%.
@@ -118,7 +120,7 @@ MiB Swap: 0.0 total, 0.0 free, 0.0 used. 1502.3 avail Mem

##### e2-standard-2 VM Instance + Cloud Spanner 100 PUs

-- e2-standard-2 (2 vCPUs, 8 GB Memory)
+* e2-standard-2 (2 vCPUs, 8 GB Memory)

The write QPS is around 600. The bottleneck comes from the VM CPU utilization
which is always around 100%. The Cloud Spanner CPU utilization is around 50%.
@@ -158,8 +160,8 @@ MiB Swap: 0.0 total, 0.0 free, 0.0 used. 5921.5 avail Mem

The following flags are used:

-- `--enable_publication_awaiter`
-- `--checkpoint_interval=1500ms`
+* `--enable_publication_awaiter`
+* `--checkpoint_interval=1500ms`

When the publication awaiter is enabled, the write QPS drops to around 500. The
bottleneck comes from the checkpoint publishing wait time. The VM CPU
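
(Illustrative aside, not part of the diff above.) The two flags listed in this hunk are plain command-line flags on the `tesseract` server binary; a minimal sketch of how they might be passed is below. The binary path and any remaining flags are deployment-specific assumptions, not taken from this commit.

```bash
# Sketch only: the two flags are the ones quoted in the hunk above; the binary
# path and the rest of the flag set vary per deployment and are placeholders.
./tesseract \
  --enable_publication_awaiter \
  --checkpoint_interval=1500ms
  # ...plus the usual storage and serving flags for the chosen backend
```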
@@ -202,7 +204,7 @@ The following flags were set on the `tesseract` server:

##### n2-standard-4 Managed Instance x 1 + Cloud Spanner 100 PUs

-- n2-standard-4 (4 vCPUs, 16 GB Memory)
+* n2-standard-4 (4 vCPUs, 16 GB Memory)

The write QPS was around 1000. The Cloud Spanner utilization was around 55%.
The VM CPU utilization was around 80%.
@@ -229,7 +231,7 @@ The VM CPU utilization was around 80%.

##### n2-standard-4 Managed Instance x 3 + Cloud Spanner 200 PUs

-- n2-standard-4 (4 vCPUs, 16 GB Memory)
+* n2-standard-4 (4 vCPUs, 16 GB Memory)

The write QPS was around 1700. The Cloud Spanner utilization was around 50%.
The VM CPU utilization was around 50%.
@@ -261,10 +263,10 @@ tool](/internal/hammer/) as of [commit `fe7687c`](https://github.com/transparenc

#### t3a.small EC2 Instance + Aurora MySQL db.r5.large

-- t3a.small (2 vCPUs, 2 GB Memory)
-- General Purpose SSD (gp3)
-  - IOPS: 3,000
-  - Throughput: 125 MiB/s
+* t3a.small (2 vCPUs, 2 GB Memory)
+* General Purpose SSD (gp3)
+  * IOPS: 3,000
+  * Throughput: 125 MiB/s

The write QPS is around 450. The bottleneck comes from the VM CPU utilization
which is always around 100%. The Aurora MySQL CPU utilization is around 30%.
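
(Illustrative aside, not part of the diff.) The load-test client referenced in the hunk header above is the hammer shipped in this repository under `/internal/hammer/`. Its flag set is not reproduced in this doc, so rather than guessing flag names, a safe starting point is to ask the tool for its usage, assuming the package builds as a runnable `main` as it does in the companion Tessera repository:

```bash
# Run from the repository root: prints the hammer's own usage and flags.
# No specific flag values are assumed here.
go run ./internal/hammer --help
```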
@@ -290,14 +292,14 @@ MiB Swap: 0.0 total, 0.0 free, 0.0 used. 704.2 avail Mem
92354 ec2-user 20 0 2794864 560568 14980 S 182.7 28.7 48:42.28 aws
```

-
### POSIX

These tests were performed in a NixOS VM under Proxmox running on a local Threadripper PRO 3975WX machine.

The machine has two independent ZFS mirror pools consisting of:
-- 2x 6TB SAS (12Gb) HDD
-- 2x 1TB NVMe SSD
+
+* 2x 6TB SAS (12Gb) HDD
+* 2x 1TB NVMe SSD

The VM was allocated 30 cores and 32 GB of RAM.

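(Illustrative aside, not part of the diff.) For readers unfamiliar with the pool layout described above: two independent ZFS mirror pools of that shape are typically created along the following lines. Pool names and device paths are hypothetical, not the test machine's actual configuration.

```bash
# Hypothetical pool names and device paths; each pool mirrors a pair of drives.
zpool create hddpool  mirror /dev/sda     /dev/sdb       # 2x 6TB SAS HDD
zpool create nvmepool mirror /dev/nvme0n1 /dev/nvme1n1   # 2x 1TB NVMe SSD
# A dataset ("subvolume") per pool can then hold the log's storage directory.
zfs create nvmepool/tesseract-log
```
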
@@ -316,7 +318,6 @@ The log and hammer were both run in the same VM, with the log using a ZFS subvol

TesseraCT sustained around 10,000 write qps, using up to 7 cores for the server.

-
```bash
┌───────────────────────────────────────────────────────────────────────────┐
│Read (8 workers): Current max: 20/s. Oversupply in last second: 0 │
@@ -368,3 +369,30 @@ MiB Swap: 0.0 total, 0.0 free, 0.0 used. 30354.7 avail Mem
272507 al 20 0 24.3g 1.4g 336236 S 97.0 4.3 4:42.73 posix

```
+
+### S3 + MySQL
+
+S3 + MySQL performance numbers will depend heavily on the S3 and MySQL setup. If
+you have the opportunity to run load tests with different relevant setups, we'd
+love to hear about them: [get in touch](../README.md#wave-contact)!
+
+#### MinIO + MariaDB (from IPng Networks)
+
+This test was performed by [IPng Networks](https://ipng.ch/), around 2025-07-26.
+More details are available in their [blog post](https://ipng.ch/s/articles/2025/07/26/certificate-transparency-part-1-tesseract/).
+Kudos to IPng and Pim!
+
+This test was performed on two Dell R630s, each with two Xeon E5-2640 v4
+CPUs (20 cores, 40 threads in total) and 512GB of DDR4 memory.
+
+The log, hammer, and MariaDB were running on a single machine. MinIO was running
+on a second machine with a SAS controller and six 1.92TB enterprise drives
+(Samsung part number P1633N19).
+
+TesseraCT sustained 500 write qps for a few hours, with:
+
+* TesseraCT using about 2.9 CPUs/s
+* MariaDB using 0.3 CPUs/s
+* The hammer using 6.0 CPUs/s
+
+Performance started to degrade around 600 write qps.
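
(Editorial note, arithmetic only.) A sustained 500 write qps over a full day works out to roughly 500 × 86,400 ≈ 43.2 million entries per day.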
