This suite consists of Loki benchmark tests for multiple scenarios. Each scenario asserts the recorded measurements against a selected profile from the `config` directory (see the sketch after the list below):

Write benchmarks:
- High Volume Writes: Measure `CPU`, `MEM` and `QPS`, `p99`, `p50`, `avg` request duration for all 2xx write requests to all Loki distributor and ingester pods.

Read benchmarks:
- High Volume Reads: Measure `QPS`, `p99`, `p50` and `avg` request duration for all 2xx read requests to all Loki query-frontend, querier and ingester pods.
- High Volume Aggregate: Measure `QPS`, `p99`, `p50` and `avg` request duration for all 2xx read requests to all Loki query-frontend, querier and ingester pods.
- Dashboard queries: Measure `QPS`, `p99`, `p50` and `avg` request duration for all 2xx read requests to all Loki query-frontend, querier and ingester pods.
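For orientation, the following Go sketch shows roughly how these scenarios could map to a profile's per-scenario settings. Only the `Scenarios` struct name and its location (`internal/config/config.go`, see the section on adding new scenarios below) come from this repository; `ScenarioSpec`, the field names, the YAML keys and the example values are illustrative assumptions, not the actual schema.

```go
// Illustrative sketch only: the real definitions live in internal/config/config.go
// and may use different field names, keys and types.
package config

import "time"

// Scenarios groups the per-scenario settings of a profile.
// The struct name is from internal/config/config.go; everything below it is assumed.
type Scenarios struct {
	HighVolumeWrites    ScenarioSpec `yaml:"highVolumeWrites"`
	HighVolumeReads     ScenarioSpec `yaml:"highVolumeReads"`
	HighVolumeAggregate ScenarioSpec `yaml:"highVolumeAggregate"`
	DashboardQueries    ScenarioSpec `yaml:"dashboardQueries"`
}

// ScenarioSpec is a hypothetical per-scenario section of a profile, e.g. whether
// the scenario runs, how many samples are recorded, and at what interval.
type ScenarioSpec struct {
	Enabled  bool          `yaml:"enabled"`
	Samples  int           `yaml:"samples"`
	Interval time.Duration `yaml:"interval"`
}
```

A profile in the `config` directory would then carry one such block per scenario, which the corresponding benchmark asserts its recorded measurements against.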
Requirements:
- Software: `gnuplot`

Note: Install on a Linux environment, e.g. on Fedora using: `sudo dnf install gnuplot`

To run the benchmark suite on a development environment:
- Required software: `kubectl`
- Repositories:
  - Observatorium
  - Optional: Cadvisor

Note: Clone the git repositories into sibling directories of the `loki-benchmarks` one.

Note: Cadvisor is only required if measuring CPU and memory of the container. In addition, change the value of the `enableCadvisorMetrics` key in the configuration to `true`; it is `false` by default.

Steps:
- Configure the parameters (`config/loki-parameters`) and deploy Loki & configure Prometheus: `make deploy-obs-loki`
- Run the benchmarks: `make bench-dev`
To run the benchmark suite on an OCP cluster:
- Required software: `oc`, `aws`
- Cluster Size: `m4.16xlarge`

Steps:
- Configure the benchmark parameters: `config/loki-parameters`
- Create the S3 bucket: `make deploy-s3-bucket`
- Deploy Prometheus: `make deploy-ocp-prometheus`
- Download the Loki Observatorium template locally: `make download-obs-loki-template`
- Deploy Loki: `make deploy-ocp-loki`
- Run the benchmarks: `make ocp-run-benchmarks`

Note: For additional details and all-in-one commands use: `make help`

Upon benchmark execution completion, the results are available in the `reports/<date+time>` folder.

Uninstall using: `make ocp-all-cleanup`.
To add a new benchmark scenario:
- Declare the new scenario with expected measurement values for each profile in the `config` directory.
- Extend the golang `Scenarios` struct in `internal/config/config.go` with the new scenario (a rough sketch appears above).
- Add a new `_test.go` file in the benchmarks directory (see the sketch after this list).
- When using `cluster-logging-load-client` as the logger, the `command` configuration parameter is either `generate` or `query`, and all other `args` configuration parameters are described in https://github.com/ViaQ/cluster-logging-load-client
- Overriding `url` and `tenant` requires that the logger implementation provides such named CLI flags.
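As a starting point for the new `_test.go` file, here is a minimal sketch. It assumes the suite uses Ginkgo v1's `Measure` API, which the `[MEASUREMENT]` / `Ran 10 samples` output below suggests; `queryP99RequestDuration`, `expectedP99` and the recorded measurement name are hypothetical stand-ins for the suite's metrics queries and the expected value taken from the selected profile.

```go
// Minimal sketch of a new scenario spec, not the repository's actual code.
package benchmarks_test

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// Hypothetical stand-ins: the real suite would query the metric (e.g. from
// Prometheus) and read the expected threshold from the selected profile.
func queryP99RequestDuration() float64 { return 0.09 }

var expectedP99 = 0.1

var _ = Describe("Scenario: My New Scenario", func() {
	Measure("should record p99 for all successful write requests", func(b Benchmarker) {
		p99 := queryP99RequestDuration()
		// Recorded values are what show up per measurement in the report output.
		b.RecordValue("All distributor 2xx Writes p99", p99)
		// Assert the recorded measurement against the profile's expected value.
		Expect(p99).To(BeNumerically("<=", expectedP99))
	}, 10) // number of samples, as in the "Ran 10 samples" example output below
})
```

The per-measurement CSV and gnuplot files described further below are presumably generated from such recorded values.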
Run the benchmark suite:

    $ make bench-dev

Example output:
    Running Suite: Benchmarks Suite
    ===============================
    Random Seed: 1597237201
    Will run 1 of 1 specs

    • [MEASUREMENT]
    Scenario: High Volume Writes
    /home/username/dev/loki-benchmarks/benchmarks/high_volume_writes_test.go:18
      should result in measurements of p99, p50 and avg for all successful write requests to the distributor
      /home/username/dev/loki-benchmarks/benchmarks/high_volume_writes_test.go:32
      Ran 10 samples:
      All distributor 2xx Writes p99:
        Smallest: 0.087
        Largest: 0.096
        Average: 0.092 ± 0.003
      All distributor 2xx Writes p50:
        Smallest: 0.003
        Largest: 0.003
        Average: 0.003 ± 0.000
      All distributor 2xx Writes avg:
        Smallest: 0.370
        Largest: 0.594
        Average: 0.498 ± 0.085
    ------------------------------
On each run, a new time-based report directory is created under the `reports` directory. Each report includes:
- A summary `README.md` with all benchmark measurements.
- A CSV file for each specific measurement.
- A gnuplot file for each specific measurement to transform the data into a PNG graph.
Example output:
    reports
    └── 2020-08-12-10-33-31
        ├── All-distributor-2xx-Writes-avg.csv
        ├── All-distributor-2xx-Writes-avg.gnuplot
        ├── All-distributor-2xx-Writes-avg.gnuplot.png
        ├── All-distributor-2xx-Writes-p50.csv
        ├── All-distributor-2xx-Writes-p50.gnuplot
        ├── All-distributor-2xx-Writes-p50.gnuplot.png
        ├── All-distributor-2xx-Writes-p99.csv
        ├── All-distributor-2xx-Writes-p99.gnuplot
        ├── All-distributor-2xx-Writes-p99.gnuplot.png
        ├── junit.xml
        └── README.md
During benchmark execution, use `hack/ocp-deploy-grafana.sh` to deploy Grafana and connect it to Loki as a datasource:
- Use a web browser to access the Grafana UI. The URL, username and password are printed by the script.
- In the UI, under Settings -> Data sources, hit `Save & test` to verify that the Loki data source is connected and that there are no errors.
- In the Explore tab, change the data source to `Loki` and use the `{client="promtail"}` query to visualize log lines.
- Use additional queries such as `rate({client="promtail"}[1m])` to verify the behaviour of Loki and the benchmark.