Commit 642fa86 (parent f86b606)

[Benchmarks] Update benchmarks README.md (#18954)

Add instructions for building, contributing, and analyzing results

File tree: 1 file changed (+74, -20)


devops/scripts/benchmarks/README.md

@@ -11,38 +11,60 @@ Scripts for running performance tests on SYCL and Unified Runtime.
 - [Gromacs](https://gitlab.com/gromacs/gromacs.git)/[Grappa](https://github.com/graeter-group/grappa)
 - [BenchDNN](https://github.com/uxlfoundation/oneDNN/tree/main/tests/benchdnn)
 
-## Running
+## Requirements
 
-`$ ./main.py ~/benchmarks_workdir/ --sycl ~/llvm/build/ --ur ~/ur --adapter adapter_name`
+* A built compiler to be used for the benchmarks.
+Instructions on where to find releases or how to build from source can be found [here](https://github.com/intel/llvm).
 
-This will download and build everything in `~/benchmarks_workdir/` using the compiler in `~/llvm/build/`, UR source from `~/ur`, and then run the benchmarks for the `adapter_name` adapter. The results will be stored in `benchmark_results.md`.
+* [Unified Runtime](https://github.com/intel/llvm/tree/sycl/unified-runtime) installed.
+The path to the UR install directory is required when UR is used for benchmarking.
 
-The scripts will try to reuse the files stored in `~/benchmarks_workdir/`, but the benchmarks will be rebuilt every time. To avoid that, use the `--no-rebuild` option.
+* `Python3` is required to install and run the benchmarks.
 
-## Running in CI
+## Building & Running
 
-The benchmark scripts are used in a GitHub Actions workflow, and can be automatically executed on a preconfigured system against any Pull Request.
+```bash
+$ git clone https://github.com/intel/llvm.git
+$ cd llvm/devops/scripts/benchmarks/
+$ pip install -r requirements.txt
 
-![compute benchmarks](workflow.png "Compute Benchmarks CI job")
+$ ./main.py ~/benchmarks_workdir/ --sycl ~/llvm/build/ --ur ~/ur_install --adapter adapter_name
+```
 
-To execute the benchmarks in CI, navigate to the `Actions` tab and then go to the `Compute Benchmarks` action. Here, you will find a list of previous runs and a "Run workflow" button. Upon clicking the button, you will be prompted to fill in a form to customize your benchmark run. The only mandatory field is the `PR number`, which is the identifier for the Pull Request against which you want the benchmarks to run.
+This last command will **download and build** everything in `~/benchmarks_workdir/`
+using the built compiler located in `~/llvm/build/`
+and the UR **install directory** in `~/ur_install`,
+and will then **run** the benchmarks for the `adapter_name` adapter.
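As a rough illustration of what that invocation amounts to, the sketch below assembles the same command line from its parts. This is a hypothetical wrapper for illustration only, not code from the scripts; the `level_zero` default mirrors the adapter default noted in the README.

```python
def benchmark_argv(workdir, sycl_build, ur_install, adapter="level_zero"):
    """Compose the main.py command line shown above as an argv list.

    workdir: where everything is downloaded and built,
    sycl_build: path to the built compiler,
    ur_install: path to the UR install directory.
    """
    return ["./main.py", workdir,
            "--sycl", sycl_build,
            "--ur", ur_install,
            "--adapter", adapter]

print(" ".join(benchmark_argv("~/benchmarks_workdir/", "~/llvm/build/", "~/ur_install")))
```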
 
-You can also include additional benchmark parameters, such as environment variables or filters. For a complete list of options, refer to `$ ./main.py --help`.
+>NOTE: By default, the `level_zero` adapter is used.
 
-Once all the required information is entered, click the "Run workflow" button to initiate a new workflow run. This will execute the benchmarks and then post the results as a comment on the specified Pull Request.
+>NOTE: Pay attention to the `--ur` parameter: it points directly to the directory where UR is installed.
+To install Unified Runtime into a predefined location, use `-DCMAKE_INSTALL_PREFIX`.
 
-You must be a member of the `oneapi-src` organization to access these features.
+UR build example:
+```
+$ cmake -DCMAKE_BUILD_TYPE=Release -S ~/llvm/unified-runtime -B ~/ur_build -DCMAKE_INSTALL_PREFIX=~/ur_install -DUR_BUILD_ADAPTER_L0=ON -DUR_BUILD_ADAPTER_L0_V2=ON
+```
+
+### Rebuild
+The scripts will try to reuse the files stored in `~/benchmarks_workdir/`, but the benchmarks will be rebuilt every time.
+To avoid that, use the `--no-rebuild` option.
+
+## Results
 
-## Comparing results
+By default, the benchmark results are not stored.
+To store them, use the option `--save <name>`. This makes the results available for comparison during subsequent benchmark runs.
+To indicate a specific results location, use the option `--results-dir <path>`.
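Conceptually, `--save <name>` just persists one named result set under the results directory. The sketch below shows one plausible way to do that; the file layout and function name are assumptions for illustration, not how main.py actually stores results.

```python
import json
import tempfile
from pathlib import Path

def save_results(results: dict, name: str, results_dir: str) -> Path:
    """Persist one named result set (--save <name>) under --results-dir.
    Illustrative layout only: one JSON file per saved name."""
    out = Path(results_dir) / f"{name}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(results, indent=2))
    return out

with tempfile.TemporaryDirectory() as d:
    path = save_results({"submit_kernel": 12.3}, "baseline", d)
    print(path.name)
```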
 
-By default, the benchmark results are not stored. To store them, use the option `--save <name>`. This will make the results available for comparison during the next benchmark runs.
+### Comparing results
 
 You can compare benchmark results using the `--compare` option. The comparison will be presented in a markdown output file (see below). If you want to calculate the relative performance of the new results against previously saved data, use `--compare <previously_saved_data>` (e.g. `--compare baseline`). To compare only stored data without generating new results, use `--dry-run --compare <name1> --compare <name2> --relative-perf <name1>`, where `name1` indicates the baseline for the relative-performance calculation and `--dry-run` prevents the script from running benchmarks. Listing more than two `--compare` options results in displaying only execution times, without statistical analysis.
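The relative-performance figure comes down to a per-benchmark ratio against the chosen baseline. A minimal sketch of that arithmetic (the benchmark names are invented, and this is not main.py's actual implementation):

```python
def relative_perf(new: dict, baseline: dict) -> dict:
    """For each benchmark present in both result sets, report new/baseline.
    The values are times, so a ratio below 1.0 means the new run is faster."""
    return {name: new[name] / baseline[name]
            for name in new.keys() & baseline.keys()}

baseline = {"api_overhead": 10.0, "submit_kernel": 4.0}
new = {"api_overhead": 8.0, "submit_kernel": 5.0, "new_bench": 1.0}
print(sorted(relative_perf(new, baseline).items()))
```

Benchmarks present in only one set (like `new_bench` here) drop out of the comparison, which matches the idea of comparing against a named baseline.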
 
-Baseline_L0, as well as Baseline_L0v2 (for the level-zero adapter v2), is updated automatically during a nightly job. The results
+>NOTE: Baseline_L0, as well as Baseline_L0v2 (for the level-zero adapter v2), is updated automatically during a nightly job.
+The results
 are stored [here](https://oneapi-src.github.io/unified-runtime/performance/).
 
-## Output formats
+### Output formats
 You can display the results as an HTML file by using `--output-html` and as a markdown file by using `--output-markdown`. Due to character limits for posting PR comments, the final content of the markdown file might be reduced. To obtain the full markdown output, use `--output-markdown full`.
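The reduction mentioned above is essentially a length cap on the comment body. A sketch of that kind of truncation, assuming GitHub's 65536-character comment limit; the function and the exact strategy are illustrative, not main.py's actual logic:

```python
def trim_markdown(full: str, limit: int = 65536,
                  notice: str = "\n…(truncated)") -> str:
    """Cut a markdown report down to a comment-sized body, keeping the start
    and appending a marker so readers know content was dropped."""
    if len(full) <= limit:
        return full
    return full[: limit - len(notice)] + notice

report = "x" * 70000  # stand-in for an oversized markdown report
short = trim_markdown(report)
print(len(short))
```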
 
 ## Logging
@@ -66,13 +88,37 @@ You can also use the `--verbose` flag, which sets the log level to `debug` and o…
 ./main.py ~/benchmarks_workdir/ --sycl ~/llvm/build/ --verbose
 ```
 
-## Requirements
+## Additional options
+
+In addition to the parameters above, there are additional options that let you run the benchmarks and read the results in a more customized way.
+
+`--preset <option>` - limits the types of benchmarks that are run.
+
+The available preset options are:
+* `Full` (Compute, Gromacs, llama, SYCL, Velocity and UMF benchmarks)
+* `SYCL` (Compute, llama, SYCL, Velocity)
+* `Minimal` (Compute)
+* `Normal` (Compute, Gromacs, llama, Velocity)
+* `Gromacs` (Gromacs)
+* `Test` (Test Suite)
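Each preset name above is just a label for a group of suites. A minimal sketch of that mapping, transcribed from the list (the suite strings are taken from the bullets; the scripts' internal names may differ):

```python
# Preset name -> benchmark suites, as listed above. Illustrative only.
PRESETS = {
    "Full":    ["Compute", "Gromacs", "llama", "SYCL", "Velocity", "UMF"],
    "SYCL":    ["Compute", "llama", "SYCL", "Velocity"],
    "Minimal": ["Compute"],
    "Normal":  ["Compute", "Gromacs", "llama", "Velocity"],
    "Gromacs": ["Gromacs"],
    "Test":    ["Test Suite"],
}

def suites_for(preset: str) -> list:
    """Resolve a --preset value to the suites it would run."""
    return PRESETS[preset]

print(suites_for("Normal"))
```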
 
-### Python
+`--filter <regex>` - allows you to set a regex pattern to filter benchmarks by name.
 
-dataclasses-json==0.6.7
-PyYAML==6.0.2
-Mako==1.3.0
+For example: `--filter "graph_api_*"`.
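Note that the value is a regular expression, not a shell glob: in `graph_api_*` the `*` quantifies the preceding `_`, so the pattern effectively matches any name containing `graph_api`. A small sketch with invented benchmark names (the filtering code is illustrative, not taken from the scripts):

```python
import re

def filter_benchmarks(names, pattern):
    """Keep names the regex matches anywhere, mirroring a --filter search."""
    rx = re.compile(pattern)
    return [n for n in names if rx.search(n)]

# Hypothetical benchmark names, purely for illustration.
names = ["graph_api_submit", "graph_api_finalize", "memory_alloc"]
print(filter_benchmarks(names, "graph_api_*"))
```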
+
+## Running in CI
+
+The benchmark scripts are used in a GitHub Actions workflow and can be automatically executed on a preconfigured system against any Pull Request.
+
+![compute benchmarks](workflow.png "Compute Benchmarks CI job")
+
+To execute the benchmarks in CI, navigate to the `Actions` tab and then go to the `Compute Benchmarks` action. There you will find a list of previous runs and a "Run workflow" button. Upon clicking the button, you will be prompted to fill in a form to customize your benchmark run. The only mandatory field is the `PR number`, the identifier of the Pull Request against which you want the benchmarks to run.
+
+You can also include additional benchmark parameters, such as environment variables or filters. For a complete list of options, refer to `$ ./main.py --help`.
+
+Once all the required information is entered, click the "Run workflow" button to initiate a new workflow run. This will execute the benchmarks and then post the results as a comment on the specified Pull Request.
+
+>NOTE: You must be a member of the `oneapi-src` organization to access these features.
 
 ### System
 
@@ -95,3 +141,11 @@ compute-runtime (Ubuntu):
 IGC (Ubuntu):
 
 `$ sudo apt-get install flex bison libz-dev cmake libc6 libstdc++6 python3-pip`
+
+## Contribution
+
+The requirements and instructions above are for building the project from source
+without any modifications. To make modifications to the framework, please see the
+[Contribution Guide](https://github.com/intel/llvm/blob/sycl/devops/scripts/benchmarks/CONTRIB.md)
+for more detailed instructions.
