# vLLM benchmark suite
## Introduction

This directory contains two sets of benchmarks for vLLM:

- Performance benchmark: benchmarks vLLM's performance under various workloads, so that **developers** can see whether their PR improves or degrades vLLM's performance.
- Nightly benchmark: compares vLLM's performance against alternatives (tgi, trt-llm, and lmdeploy), so that **the public** knows when to choose vLLM.

See the [vLLM performance dashboard](https://perf.vllm.ai) for the latest performance benchmark results and the [vLLM GitHub README](https://github.com/vllm-project/vllm/blob/main/README.md) for the latest nightly benchmark results.
## Performance benchmark quick overview
**For benchmarking developers**: please try your best to constrain the duration of benchmarking to about 1 hr so that it won't take forever to run.
## Nightly benchmark quick overview
**Benchmarking Coverage**: fixed-QPS serving on A100 (FP8 benchmark support on H100 is coming!) on Llama-3 8B, 70B, and Mixtral 8x7B.

**Benchmarking engines**: vLLM, TGI, trt-llm, and lmdeploy.

**Benchmarking Duration**: about 3.5 hours.
## Trigger the benchmark
Performance benchmark will be triggered when:

Nightly benchmark will be triggered when:
- Every commit for those PRs with `perf-benchmarks` label and `nightly-benchmarks` label.
## Performance benchmark details
See [performance-benchmarks-descriptions.md](performance-benchmarks-descriptions.md) for detailed descriptions, and use `tests/latency-tests.json`, `tests/throughput-tests.json`, `tests/serving-tests.json` to configure the test cases.
### Latency test
Here is an example of one test inside `latency-tests.json`:
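(A sketch of such an entry is shown below; the test name is illustrative, and the parameters mirror the command line arguments spelled out in the bullets that follow.)

```json
[
  {
    "test_name": "latency_llama8B_tp1",
    "parameters": {
      "model": "meta-llama/Meta-Llama-3-8B",
      "tensor_parallel_size": 1,
      "load_format": "dummy",
      "num_iters_warmup": 5,
      "num_iters": 15
    }
  }
]
```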
In this example:
- The `test_name` attribute is a unique identifier for the test. In `latency-tests.json`, it must start with `latency_`.
- The `parameters` attribute controls the command line arguments used for `benchmark_latency.py`. Note that you should use an underscore `_` instead of a dash `-` when specifying the command line arguments; `run-performance-benchmarks.sh` will convert the underscores to dashes when feeding the arguments to `benchmark_latency.py`. For example, the corresponding command line arguments for `benchmark_latency.py` will be `--model meta-llama/Meta-Llama-3-8B --tensor-parallel-size 1 --load-format dummy --num-iters-warmup 5 --num-iters 15`.

Note that the performance numbers are highly sensitive to the values of the parameters. Please make sure the parameters are set correctly.

WARNING: The benchmarking script will save json results by itself, so please do not configure the `--output-json` parameter in the json file.
### Throughput test
The tests are specified in `throughput-tests.json`. The syntax is similar to `latency-tests.json`, except that the parameters will be fed forward to `benchmark_throughput.py`.
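As a rough sketch (the test name and parameter values here are purely illustrative, assuming the same style of fields as the latency example above), an entry might look like:

```json
[
  {
    "test_name": "throughput_llama8B_tp1",
    "parameters": {
      "model": "meta-llama/Meta-Llama-3-8B",
      "tensor_parallel_size": 1,
      "load_format": "dummy",
      "dataset": "./ShareGPT_V3_unfiltered_cleaned_split.json",
      "num_prompts": 200
    }
  }
]
```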
The number of this test is also stable -- a slight change in the value of this number might vary the performance numbers by a lot.
### Serving test
We test the throughput by using `benchmark_serving.py` with request rate = inf to cover the online serving overhead. The corresponding parameters are in `serving-tests.json`; here is an abridged example:
```json
[
  {
    "test_name": "serving_llama8B_tp1_sharegpt",
    "server-parameters": { ... },
    "client-parameters": { ... }
  }
]
```
Inside this example:
- The `test_name` attribute is also a unique identifier for the test. It must start with `serving_`.
- The `server-parameters` attribute includes the command line arguments for the vLLM server.
- The `client-parameters` attribute includes the command line arguments for `benchmark_serving.py`.

The number of this test is less stable compared to the delay and latency benchmarks.

WARNING: The benchmarking script will save json results by itself, so please do not configure `--save-results` or other results-saving-related parameters in `serving-tests.json`.
### Visualizing the results
The `convert-results-json-to-markdown.py` script helps you put the benchmarking results inside a markdown table, by formatting [descriptions.md](tests/descriptions.md) with real benchmarking results.
You can find the results presented as a table inside the `buildkite/performance-benchmark` job page.
If you do not see the table, please wait until the benchmark finishes running.
The json version of the table (together with the json version of the benchmark) will also be attached to the markdown file.
The raw benchmarking results (in the format of json files) are in the `Artifacts` tab of the benchmarking job.
## Nightly test details
See [nightly-descriptions.md](nightly-descriptions.md) for a detailed description of the test workload, models, and docker containers used for benchmarking other LLM engines.
### Workflow
- The [nightly-pipeline.yaml](nightly-pipeline.yaml) specifies the docker containers for the different LLM serving engines.
- Inside each container, we run [run-nightly-suite.sh](run-nightly-suite.sh), which probes the serving engine of the current container.
- `run-nightly-suite.sh` then redirects the request to `tests/run-[llm serving engine name]-nightly.sh`, which parses the workload described in [nightly-tests.json](tests/nightly-tests.json) and performs the benchmark.
- Finally, we run [scripts/plot-nightly-results.py](scripts/plot-nightly-results.py) to collect and plot the final benchmarking results, and upload the results to buildkite.
### Nightly tests
In [nightly-tests.json](tests/nightly-tests.json), we include the command line arguments for the benchmarking commands, together with the benchmarking test cases. The format is very similar to the performance benchmark.
### Docker containers
The docker containers for benchmarking are specified in `nightly-pipeline.yaml`.

WARNING: the docker versions are HARD-CODED and SHOULD BE ALIGNED WITH `nightly-descriptions.md`. The docker versions need to be hard-coded as there are several version-specific bug fixes inside `tests/run-[llm serving engine name]-nightly.sh`.

WARNING: updating `trt-llm` to the latest version is not easy, as it requires updating several protobuf files in [tensorrt-demo](https://github.com/neuralmagic/tensorrt-demo.git).