Instructions on where to find releases or how to build from sources can be found [here](https://github.com/intel/llvm).

* `Python3` is required to install and run benchmarks.

## Building & Running

```bash
$ git clone https://github.com/intel/llvm.git
$ cd llvm/devops/scripts/benchmarks/
$ pip install -r requirements.txt
```

Running the `main.py` script will **download and build** everything in `~/benchmarks_workdir/` using the built compiler located in `~/llvm/build/` and the UR **install directory** from `~/ur`, and then **run** the benchmarks for the `adapter_name` adapter.

>NOTE: By default, the `level_zero` adapter is used.

>NOTE: Pay attention to the `--ur` parameter. It points directly to the directory where UR is installed. To install Unified Runtime in a predefined location, pass `-DCMAKE_INSTALL_PREFIX` to CMake.
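As a sketch (the source and install paths here are illustrative, not prescribed by this document), installing UR into a chosen prefix with CMake might look like:

```bash
# Configure with an explicit install prefix, then build and install.
$ cmake -S unified-runtime -B build -DCMAKE_INSTALL_PREFIX=$HOME/ur
$ cmake --build build -j
$ cmake --install build
```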

The scripts will try to reuse the files stored in `~/benchmarks_workdir/`, but the benchmarks will be rebuilt every time. To avoid that, use the `--no-rebuild` option.

## Results

By default, the benchmark results are not stored. To store them, use the option `--save <name>`. This will make the results available for comparison during the next benchmark runs. To indicate a specific results location, use the option `--results-dir <path>`.
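As an illustration, both options can be combined in one invocation (the name `baseline` and the directory are hypothetical):

```bash
# Save this run's results as "baseline" in a custom results directory.
$ ./main.py ~/benchmarks_workdir/ --save baseline --results-dir ~/bench_results
```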

### Comparing results
You can compare benchmark results using the `--compare` option. The comparison will be presented in a markdown output file (see below). If you want to calculate the relative performance of the new results against previously saved data, use `--compare <previously_saved_data>` (e.g. `--compare baseline`). To compare only stored data without generating new results, use `--dry-run --compare <name1> --compare <name2> --relative-perf <name1>`, where `<name1>` indicates the baseline for the relative performance calculation and `--dry-run` prevents the script from running benchmarks. Listing more than two `--compare` options results in displaying only execution time, without statistical analysis.
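For instance, a dry-run comparison of two previously saved result sets (the names `baseline` and `new_run` are illustrative) could look like:

```bash
# Compare stored results only; "baseline" is the reference for relative performance.
$ ./main.py ~/benchmarks_workdir/ --dry-run --compare baseline --compare new_run --relative-perf baseline
```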

>NOTE: Baseline_L0, as well as Baseline_L0v2 (for the level-zero adapter v2), is updated automatically during a nightly job. The results are stored [here](https://oneapi-src.github.io/unified-runtime/performance/).

### Output formats

You can display the results in the form of an HTML file by using `--output-html` and a markdown file by using `--output-markdown`. Due to character limits for posting PR comments, the final content of the markdown file might be reduced. In order to obtain the full markdown output, use `--output-markdown full`.
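For example, a run that emits both formats (combining the flags described above) might be invoked as:

```bash
# Produce an HTML report and an untruncated markdown report.
$ ./main.py ~/benchmarks_workdir/ --output-html --output-markdown full
```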
## Logging
You can also use the `--verbose` flag, which sets the log level to `debug` and overrides the `--log-level` setting.
In addition to the above parameters, there are additional options that allow running benchmarks and reading the results in a more customized way.

`--preset <option>` - limits the types of benchmarks that are run.

The available preset options are:

* `Full` (Compute, Gromacs, llama, SYCL, Velocity and UMF benchmarks)
* `SYCL` (Compute, llama, SYCL, Velocity)
* `Minimal` (Compute)
* `Normal` (Compute, Gromacs, llama, Velocity)
* `Gromacs` (Gromacs)
* `Test` (Test Suite)
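As an illustration, restricting a run to the Compute benchmarks only:

```bash
# The "Minimal" preset runs the Compute benchmarks only.
$ ./main.py ~/benchmarks_workdir/ --preset Minimal
```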

`--filter <regex>` - sets a regex pattern used to filter benchmarks by name.

For example, `--filter "graph_api_*"`.
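Note that the filter is a regular expression, not a shell glob: in `graph_api_*` the `*` quantifies the preceding `_`, and an unanchored search matches anywhere in the name. A small Python sketch with hypothetical benchmark names:

```python
import re

# Hypothetical benchmark names; the real set depends on the installed suites.
names = ["graph_api_submit", "graph_api_finalize", "memory_alloc_host"]

# "_*" means zero or more underscores; re.search matches anywhere in the string.
pattern = re.compile("graph_api_*")
selected = [n for n in names if pattern.search(n)]
print(selected)  # ['graph_api_submit', 'graph_api_finalize']
```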

## Running in CI

The benchmark scripts are used in a GitHub Actions workflow and can be automatically executed on a preconfigured system against any Pull Request.

To execute the benchmarks in CI, navigate to the `Actions` tab and then go to the `Compute Benchmarks` action. Here, you will find a list of previous runs and a "Run workflow" button. Upon clicking the button, you will be prompted to fill in a form to customize your benchmark run. The only mandatory field is the `PR number`, which is the identifier for the Pull Request against which you want the benchmarks to run.

You can also include additional benchmark parameters, such as environment variables or filters. For a complete list of options, refer to `$ ./main.py --help`.

Once all the required information is entered, click the "Run workflow" button to initiate a new workflow run. This will execute the benchmarks and then post the results as a comment on the specified Pull Request.

>NOTE: You must be a member of the `oneapi-src` organization to access these features.