
Commit fe172cb

Merge pull request #3 from LambdaLabsML/eole/benchmarking
Log inference time and reserved GPU memory
2 parents c556c85 + 2b5da79 commit fe172cb

16 files changed: +545 -8 lines changed

.gitignore

Lines changed: 3 additions & 1 deletion
@@ -1,5 +1,6 @@
 model_zoo/
 outputs/
+*benchmark_tmp.csv
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
@@ -130,6 +131,7 @@ venv/
 ENV/
 env.bak/
 venv.bak/
+.venv*/
 
 # Spyder project settings
 .spyderproject
@@ -160,4 +162,4 @@ cython_debug/
 # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
 # and can be added to the global gitignore or merged into this file. For a more nuclear
 # option (not recommended) you can uncomment the following to ignore the entire idea folder.
-#.idea/
+#.idea/

.vscode/settings.json

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
{
    "python.formatting.provider": "black"
}

README.md

Lines changed: 26 additions & 0 deletions
@@ -98,6 +98,32 @@ for idx, im in enumerate(images):
     im.save(f"{idx:06}.png")
 ```
 
+## Benchmarking inference
+
+Detailed benchmark documentation can be found [here](./docs/benchmark.md).
+
+### Setup
+
+Before running the benchmark, make sure you have completed the repository [installation steps](#installation).
+
+You will then need to set the huggingface access token:
+1. Create a user account on HuggingFace and generate an access token.
+2. Set your huggingface access token as the `ACCESS_TOKEN` environment variable:
+```
+export ACCESS_TOKEN=<hf_...>
+```
+
+### Usage
+
+Launch the benchmark script to append benchmark results to the existing [benchmark.csv](./benchmark.csv) results file:
+```
+python ./scripts/benchmark.py
+```
+
+### Results
+
+<img src="./docs/pictures/pretty_benchmark_sd_txt2img_latency.png" alt="Stable Diffusion Text2Image Latency (seconds)" width="850"/>
+
 ## Links
 
 - [Captioned Pokémon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)

benchmark.csv

Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz,single,pytorch,1,458.97,0.0
Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz,single,onnx,1,286.13,0.0
NVIDIA GeForce RTX 3090,single,pytorch,1,7.96,7.72
NVIDIA GeForce RTX 3090,half,pytorch,1,4.83,4.54
NVIDIA GeForce RTX 3090,single,pytorch,2,14.49,11
NVIDIA GeForce RTX 3090,half,pytorch,2,8.42,8.75
NVIDIA GeForce RTX 3090,single,pytorch,4,27.94,17.69
NVIDIA GeForce RTX 3090,half,pytorch,4,15.87,15.36
NVIDIA GeForce RTX 3090,single,pytorch,8,-1.0,-1.0
NVIDIA GeForce RTX 3090,half,pytorch,8,-1.0,-1.0
NVIDIA RTX A5500,single,pytorch,1,8.55,7.69
NVIDIA RTX A5500,half,pytorch,1,5.05,4.58
NVIDIA RTX A5500,single,pytorch,2,15.71,11
NVIDIA RTX A5500,half,pytorch,2,9.37,8.8
NVIDIA RTX A5500,single,pytorch,4,30.51,17.69
NVIDIA RTX A5500,half,pytorch,4,16.97,15.33
NVIDIA RTX A5500,single,pytorch,8,-1.0,-1.0
NVIDIA RTX A5500,half,pytorch,8,-1.0,-1.0
AMD EPYC 7352 24-Core Processor,single,pytorch,1,529.93,0.0
AMD EPYC 7352 24-Core Processor,single,onnx,1,223.19,0.0
NVIDIA GeForce RTX 3080,single,pytorch,4,-1.0,-1.0
NVIDIA GeForce RTX 3080,half,pytorch,4,-1.0,-1.0
NVIDIA GeForce RTX 3080,single,pytorch,1,-1.0,-1.0
NVIDIA GeForce RTX 3080,half,pytorch,1,5.59,4.52
NVIDIA GeForce RTX 3080,single,pytorch,2,-1.0,-1.0
NVIDIA GeForce RTX 3080,half,pytorch,2,-1.0,-1.0
NVIDIA A100 80GB PCIe,single,pytorch,1,6.39,7.75
NVIDIA A100 80GB PCIe,half,pytorch,1,3.74,4.55
NVIDIA A100 80GB PCIe,single,pytorch,2,11.12,11.05
NVIDIA A100 80GB PCIe,half,pytorch,2,5.72,8.77
NVIDIA A100 80GB PCIe,single,pytorch,4,20.18,17.63
NVIDIA A100 80GB PCIe,half,pytorch,4,10.04,15.34
NVIDIA A100 80GB PCIe,single,pytorch,8,38.88,30.88
NVIDIA A100 80GB PCIe,half,pytorch,8,18.68,28.47
NVIDIA A100 80GB PCIe,single,pytorch,16,76.92,57.46
NVIDIA A100 80GB PCIe,half,pytorch,16,36.67,54.73
NVIDIA A100 80GB PCIe,half,pytorch,28,63.88,78.78
NVIDIA RTX A6000,single,pytorch,1,8.09,7.75
NVIDIA RTX A6000,half,pytorch,1,5.03,4.53
NVIDIA RTX A6000,single,pytorch,2,14.86,10.98
NVIDIA RTX A6000,half,pytorch,2,9.03,8.79
NVIDIA RTX A6000,single,pytorch,4,27.92,17.62
NVIDIA RTX A6000,half,pytorch,4,17.0,15.34
NVIDIA RTX A6000,single,pytorch,8,53.95,30.88
NVIDIA RTX A6000,half,pytorch,8,32.57,28.51
NVIDIA RTX A6000,half,pytorch,16,63.16,46.11
Quadro RTX 8000,single,pytorch,1,12.3,7.71
Quadro RTX 8000,half,pytorch,1,5.93,4.52
Quadro RTX 8000,single,pytorch,2,24.42,9.16
Quadro RTX 8000,half,pytorch,2,10.92,7.02
Quadro RTX 8000,single,pytorch,4,42.56,15.58
Quadro RTX 8000,half,pytorch,4,21.24,12.39
Quadro RTX 8000,single,pytorch,8,76.96,23.11
Quadro RTX 8000,half,pytorch,8,40.52,20.98
Quadro RTX 8000,single,pytorch,16,152.55,42.47
Quadro RTX 8000,half,pytorch,16,80.31,38.18
Quadro RTX 8000,single,pytorch,32,-1.0,-1.0
Quadro RTX 8000,half,pytorch,32,-1.0,-1.0

docs/benchmark.md

Lines changed: 112 additions & 0 deletions
@@ -0,0 +1,112 @@
# Benchmarking Diffuser Models

We present a benchmark of [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) model inference. This text2image model uses a text prompt as input and outputs an image of resolution `512x512`.

Our experiments analyze inference performance in terms of speed, memory consumption, throughput, and quality of the output images. We look at how different choices in hardware (GPU model, GPU vs CPU) and software (single vs half precision, pytorch vs onnxruntime) affect inference performance.

For reference, we provide benchmark results for the following GPU devices: A100 80GB PCIe, RTX3090, RTXA5500, RTXA6000, RTX3080, RTX8000. Please refer to the ["Reproducing the experiments"](#reproducing-the-experiments) section for details on running these experiments in your own environment.


## Inference speed

The figure below shows the inference latency when using different hardware and precisions to generate a single image from the (arbitrary) text prompt: *"a photo of an astronaut riding a horse on mars"*.

<img src="./pictures/pretty_benchmark_sd_txt2img_latency.png" alt="Stable Diffusion Text2Image Latency (seconds)" width="800"/>

We find that:
* The inference latencies range from `3.74` to `5.56` seconds across our tested Ampere GPUs, from the consumer 3080 card to the flagship A100 80GB card.
* Half-precision reduces the latency by about `40%` for Ampere GPUs, and by `52%` for the previous-generation `RTX8000` GPU.

We believe Ampere GPUs enjoy a relatively "smaller" speedup from half-precision due to their use of `TF32`. For readers who are not familiar with `TF32`, it is a [`19-bit` format](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) that has been used as the default single-precision data type on Ampere GPUs for major deep learning frameworks such as PyTorch and TensorFlow. One can expect half-precision's speedup over true `FP32` (a full `32-bit` format) to be bigger.
22+
23+
24+
We run these same inference jobs CPU devices to put in perspective the inference speed performance observed on GPU.
25+
26+
<img src="./pictures/pretty_benchmark_sd_txt2img_gpu_vs_cpu.png" alt="Stable Diffusion Text2Image GPU v CPU" width="700"/>
27+
28+
29+
We note that:
30+
* GPUs are significantly faster -- by one or two orders of magnitudes depending on the precisions.
31+
* `onnxruntime` can reduce the latency for CPU by about `40%` to `50%`, depending on the type of CPUs.
32+
33+
ONNX currently does not have [stable support](https://github.com/huggingface/diffusers/issues/489) for Huggingface diffusers.
34+
We will investigate `onnxruntime-gpu` in future benchmarks.


## Memory

We also measure the memory consumption of running stable diffusion inference.

<img src="./pictures/pretty_benchmark_sd_txt2img_mem.png" alt="Stable Diffusion Text2Image Memory (GB)" width="640"/>

Memory usage is observed to be consistent across all tested GPUs:
* It takes about `7.7 GB` of GPU memory to run single-precision inference with batch size one.
* It takes about `4.5 GB` of GPU memory to run half-precision inference with batch size one.


## Throughput

Latency measures how quickly a _single_ input can be processed, which is critical to online applications that don't tolerate even the slightest delay. However, some (offline) applications may focus on "throughput", which measures the total volume of data processed in a fixed amount of time.

Our throughput benchmark pushes the batch size to the maximum for each GPU, and measures the number of images they can process per minute. The reason for maximizing the batch size is to keep tensor cores busy so that computation can dominate the workload, avoiding any non-computational bottlenecks.

We run a series of throughput experiments in pytorch with half-precision, using the maximum batch size that can be used on each GPU:

<img src="./pictures/pretty_benchmark_sd_txt2img_throughput.png" alt="Stable Diffusion Text2Image Throughput (images/minute)" width="390"/>

We note:
* Once again, A100 80GB is the top performer and has the highest throughput.
* The gap between A100 80GB and the other cards in terms of throughput can be explained by the larger maximum batch size that can be used on this card.


As a concrete example, the chart below shows how A100 80GB's throughput increases by `64%` when we change the batch size from 1 to 28 (the largest that does not cause an out-of-memory error). It is also interesting to see that the increase is not linear and flattens out once the batch size reaches a certain value, at which point the tensor cores on the GPU are saturated and any new data in GPU memory has to be queued up before getting its own computing resources.

<img src="./pictures/pretty_benchmark_sd_txt2img_batchsize_vs_throughput.png" alt="Stable Diffusion Text2Image Batch size vs Throughput (images/minute)" width="380"/>


## Precision

We are curious about whether half-precision introduces degradations to the quality of the output images. To test this out, we fixed the text prompt as well as the "latent" input vector and fed them to the single-precision model and the half-precision model. We ran the inference for 100 steps and saved both models' outputs at each step, as well as the difference map:

![Evolution of precision v degradation across 100 steps](./pictures/benchmark_sd_precision_history.gif)

Our observation is that there are indeed visible differences between the single-precision output and the half-precision output, especially in the early steps. The differences often decrease with the number of steps, but might not always vanish.

Interestingly, such a difference may not imply artifacts in half-precision's outputs. For example, at step 70, the picture below shows that half-precision did not produce the artifact seen in the single-precision output (an extra front leg):

![Precision v Degradation at step 70](./pictures/benchmark_sd_precision_step_70.png)

---

## Reproducing the experiments

You can use this [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) repository to reproduce the results presented in this article.

## Setup

Before running the benchmark, make sure you have completed the repository [installation steps](../README.md#installation).

You will then need to set the huggingface access token:
1. Create a user account on HuggingFace and generate an access token.
2. Set your huggingface access token as the `ACCESS_TOKEN` environment variable:
```
export ACCESS_TOKEN=<hf_...>
```

## Usage

Launch the `benchmark.py` script to append benchmark results to the existing [benchmark.csv](../benchmark.csv) results file:
```
python ./scripts/benchmark.py
```

Launch the `benchmark_quality.py` script to compare the output of single-precision and half-precision models:
```
python ./scripts/benchmark_quality.py
```

docs/pictures/FreeMono.ttf

336 KB (binary file not shown)
Remaining binary image files (8.72 MB, 1.05 MB, and others; previews not shown)
