Fix gradle build task #263


Draft · wants to merge 11 commits into base: main
66 changes: 66 additions & 0 deletions benchmark/Dockerfile
@@ -0,0 +1,66 @@
# Dacapo download
FROM debian:bookworm-slim as dacapo
RUN apt-get update \
&& apt-get -y install wget unzip libc6 \
&& apt-get -y clean \
&& rm -rf /var/lib/apt/lists/*

ARG DACAPO_VERSION=23.11-chopin
# The data for the big benchmarks is removed to ensure the final docker image is not too big
RUN wget -nv -O dacapo.zip https://download.dacapobench.org/chopin/dacapo-$DACAPO_VERSION.zip \
&& mkdir /dacapo \
&& unzip dacapo.zip -d /dacapo/ \
&& rm -rf /dacapo/dacapo-$DACAPO_VERSION/dat/luindex \
&& rm -rf /dacapo/dacapo-$DACAPO_VERSION/dat/lusearch \
&& rm -rf /dacapo/dacapo-$DACAPO_VERSION/dat/graphchi \
&& rm dacapo.zip

# Download and install renaissance benchmark
ARG RENAISSANCE_VERSION=0.16.0
RUN mkdir /renaissance \
&& wget -nv -O ./renaissance/renaissance-gpl.jar https://github.com/renaissance-benchmarks/renaissance/releases/download/v${RENAISSANCE_VERSION}/renaissance-gpl-${RENAISSANCE_VERSION}.jar

FROM debian:bookworm-slim

RUN apt-get update \
&& apt-get -y install git curl wget procps gettext-base \
&& apt-get -y clean \
&& rm -rf /var/lib/apt/lists/*

COPY --from=eclipse-temurin:8-jammy /opt/java/openjdk /usr/lib/jvm/8
COPY --from=eclipse-temurin:11-jammy /opt/java/openjdk /usr/lib/jvm/11
COPY --from=eclipse-temurin:17-jammy /opt/java/openjdk /usr/lib/jvm/17

RUN rm -rf \
/usr/lib/jvm/*/man \
/usr/lib/jvm/*/src.zip \
/usr/lib/jvm/*/lib/src.zip \
/usr/lib/jvm/*/demo \
/usr/lib/jvm/*/sample

ENV JAVA_8_HOME=/usr/lib/jvm/8
ENV JAVA_11_HOME=/usr/lib/jvm/11
ENV JAVA_17_HOME=/usr/lib/jvm/17
ENV JAVA_HOME=${JAVA_8_HOME}
ENV PATH=${PATH}:${JAVA_HOME}/bin

ARG SIRUN_VERSION=0.1.11
RUN wget -O sirun.tar.gz https://github.com/DataDog/sirun/releases/download/v$SIRUN_VERSION/sirun-v$SIRUN_VERSION-x86_64-unknown-linux-musl.tar.gz \
&& tar -xzf sirun.tar.gz \
&& rm sirun.tar.gz \
&& mv sirun /usr/bin/sirun

ARG K6_VERSION=0.45.1
RUN wget -O k6.tar.gz https://github.com/grafana/k6/releases/download/v$K6_VERSION/k6-v$K6_VERSION-linux-amd64.tar.gz \
&& tar --strip-components=1 -xzf k6.tar.gz \
&& rm k6.tar.gz \
&& mv k6 /usr/bin/k6

RUN mkdir -p /app

COPY --from=dacapo /dacapo/ /app/
ARG DACAPO_VERSION=23.11-chopin
ENV DACAPO=/app/dacapo-$DACAPO_VERSION.jar

COPY --from=dacapo /renaissance/ /app/
ENV RENAISSANCE=/app/renaissance-gpl.jar
29 changes: 29 additions & 0 deletions benchmark/README.md
@@ -0,0 +1,29 @@
# Benchmarks

This directory contains different types of benchmarks.

## Running Benchmarks via Docker

Docker lets you run the benchmarks without installing and configuring a local development environment: the required system packages, sirun, and k6 are installed automatically inside the image.

To run the benchmarks with Docker, issue the following command from the `benchmark/` folder of this project:

```sh
./run.sh
```

If you run into storage errors (e.g. running out of disk space), try removing all unused Docker containers, networks, and images with `docker system prune -af` before running the script again. Once finished, the reports will be available in the `benchmark/reports/` folder. Note that the script can take ~40 minutes to run.

### Running specific benchmarks

If you want to run only a specific benchmark suite, pass its name as the first argument, optionally followed by a single benchmark:

1. Run the DaCapo benchmarks
```sh
./run.sh dacapo [benchmark]?
```

2. Run the Renaissance benchmarks
```sh
./run.sh renaissance [benchmark]?
```
48 changes: 48 additions & 0 deletions benchmark/benchmarks.sh
@@ -0,0 +1,48 @@
#!/usr/bin/env bash
set -eu

echo "Running benchmarks ..."

readonly SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd)
export PROFILER_DIR="${SCRIPT_DIR}/.."
export REPORTS_DIR="${SCRIPT_DIR}/reports"
export UTILS_DIR="${SCRIPT_DIR}/utils"
export SHELL_UTILS_DIR="${UTILS_DIR}/shell"
export K6_UTILS_DIR="${UTILS_DIR}/k6"
export PROFILER="${SCRIPT_DIR}/profiler/libjavaProfiler.so"

run_benchmarks() {
local type=$1
if [[ -d "${type}" ]] && [[ -f "${type}/run.sh" ]]; then
cd "${type}"
./run.sh "$@"
cd "${SCRIPT_DIR}"
fi
}

# Find or rebuild profiler to be used in the benchmarks
if [[ ! -f "${PROFILER}" ]]; then
mkdir -p "${SCRIPT_DIR}/profiler"
cd "${PROFILER_DIR}"
ARCH=$(uname -p)
if [[ "$ARCH" == "x86_64" ]]; then
ARCH="x64"
elif [[ "$ARCH" == "aarch64" ]]; then
ARCH="arm64"
fi

readonly PROFILER_VERSION=$(./gradlew properties -q | grep "version:" | awk '{print $2}')
readonly PROFILER_COMPILED="${SCRIPT_DIR}/../ddprof-lib/build/lib/main/release/linux/${ARCH}/libjavaProfiler.so"
if [[ ! -f "${PROFILER_COMPILED}" ]]; then
echo "Profiler not found, starting gradle compile ..."
./gradlew assemble
fi
cp "${PROFILER_COMPILED}" "${PROFILER}"
cd "${SCRIPT_DIR}"
fi

if [[ "$#" == '0' ]]; then
for type in 'dacapo' 'renaissance'; do
run_benchmarks "$type"
done
else
run_benchmarks "$@"
fi
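The architecture detection above maps `uname -p` machine names onto the directory names used by the gradle build outputs. A `case` statement keeps that mapping in one place and falls through cleanly for unknown architectures; this is a standalone sketch, not part of the PR:

```shell
#!/usr/bin/env bash
# Map `uname -p` machine names onto the gradle output directory names,
# mirroring the detection in benchmarks.sh and run.sh.
map_arch() {
  case "$1" in
    x86_64)  echo "x64"   ;;
    aarch64) echo "arm64" ;;
    *)       echo "$1"    ;;  # pass through unknown architectures unchanged
  esac
}

map_arch "x86_64"     # prints: x64
map_arch "aarch64"    # prints: arm64
map_arch "$(uname -p)"
```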
21 changes: 21 additions & 0 deletions benchmark/dacapo/benchmark.json
@@ -0,0 +1,21 @@
{
"name": "dacapo_${BENCHMARK}",
"setup": "bash -c \"mkdir -p ${OUTPUT_DIR}/${VARIANT}\"",
"run": "bash -c \"java ${JAVA_OPTS} -jar ${DACAPO} --converge --scratch-directory=${OUTPUT_DIR}/${VARIANT}/scratch --latency-csv ${BENCHMARK} &> ${OUTPUT_DIR}/${VARIANT}/dacapo.log\"",
"timeout": 150,
"iterations": 1,
"variants": {
"baseline": {
"env": {
"VARIANT": "baseline",
"JAVA_OPTS": ""
}
},
"profiling": {
"env": {
"VARIANT": "profiling",
"JAVA_OPTS": "-agentpath:${PROFILER}=start,wall=10ms,file=${OUTPUT_DIR}/${VARIANT}/profiler.jfr"
}
}
}
}
41 changes: 41 additions & 0 deletions benchmark/dacapo/run.sh
@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -eu

source "${UTILS_DIR}/update-java-version.sh" 11

function message() {
echo "$(date +"%T"): $1"
}

run_benchmark() {
local type=$1

message "dacapo benchmark: ${type} started"

# export the benchmark
export BENCHMARK="${type}"

# create output folder for the test
export OUTPUT_DIR="${REPORTS_DIR}/dacapo/${type}"
mkdir -p "${OUTPUT_DIR}"

# substitute environment variables in the json file
benchmark=$(mktemp)
# shellcheck disable=SC2046
# shellcheck disable=SC2016
envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" <benchmark.json >"${benchmark}"

# run the sirun test
sirun "${benchmark}" &>"${OUTPUT_DIR}/${type}.json"

message "dacapo benchmark: ${type} finished"
}

if [ "$#" == '2' ]; then
run_benchmark "$2"
else
for benchmark in biojava tomcat ; do
run_benchmark "${benchmark}"
done
fi

21 changes: 21 additions & 0 deletions benchmark/renaissance/benchmark.json
@@ -0,0 +1,21 @@
{
"name": "renaissance_${BENCHMARK}",
"setup": "bash -c \"mkdir -p ${OUTPUT_DIR}/${VARIANT}\"",
"run": "bash -c \"java ${JAVA_OPTS} -jar ${RENAISSANCE} --converge --scratch-base=${OUTPUT_DIR}/${VARIANT}/scratch --json ${OUTPUT_DIR}/${VARIANT}/${BENCHMARK}.json ${BENCHMARK}\"",
"timeout": 150,
"iterations": 1,
"variants": {
"baseline": {
"env": {
"VARIANT": "baseline",
"JAVA_OPTS": ""
}
},
"profiling": {
"env": {
"VARIANT": "profiling",
"JAVA_OPTS": "-agentpath:${PROFILER}=start,wall=10ms,file=${OUTPUT_DIR}/${VARIANT}/profiler.jfr"
}
}
}
}
41 changes: 41 additions & 0 deletions benchmark/renaissance/run.sh
@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -eu

source "${UTILS_DIR}/update-java-version.sh" 11

function message() {
echo "$(date +"%T"): $1"
}

run_benchmark() {
local type=$1

message "renaissance benchmark: ${type} started"

# export the benchmark
export BENCHMARK="${type}"

# create output folder for the test
export OUTPUT_DIR="${REPORTS_DIR}/renaissance/${type}"
mkdir -p "${OUTPUT_DIR}"

# substitute environment variables in the json file
benchmark=$(mktemp)
# shellcheck disable=SC2046
# shellcheck disable=SC2016
envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" <benchmark.json >"${benchmark}"

# run the sirun test
sirun "${benchmark}" &>"${OUTPUT_DIR}/${type}.json"

message "renaissance benchmark: ${type} finished"
}

if [ "$#" == '2' ]; then
run_benchmark "$2"
else
for benchmark in akka-uct neo4j-analytics ; do
run_benchmark "${benchmark}"
done
fi

52 changes: 52 additions & 0 deletions benchmark/run.sh
@@ -0,0 +1,52 @@
#!/usr/bin/env bash
set -eu

readonly SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
readonly INITIAL_DIR="$(pwd)"
export PROFILER="${SCRIPT_DIR}/profiler/libjavaProfiler.so"

cd "${SCRIPT_DIR}"

# Build container image
echo "Building base image ..."
docker build \
-t java-profiler/benchmark \
.

# Find or rebuild profiler to be used in the benchmarks
if [[ ! -f "${PROFILER}" ]]; then
mkdir -p "${SCRIPT_DIR}/profiler"
cd "${SCRIPT_DIR}/.."

ARCH=$(uname -p)
if [[ "$ARCH" == "x86_64" ]]; then
ARCH="x64"
elif [[ "$ARCH" == "aarch64" ]]; then
ARCH="arm64"
fi

readonly PROFILER_VERSION=$(./gradlew properties -q | grep "version:" | awk '{print $2}')
readonly PROFILER_COMPILED="${SCRIPT_DIR}/../ddprof-lib/build/lib/main/release/linux/${ARCH}/libjavaProfiler.so"
if [ ! -f "${PROFILER_COMPILED}" ]; then
echo "Profiler not found, starting gradle compile ..."
./gradlew assemble
fi
cp "${PROFILER_COMPILED}" "${PROFILER}"
cd "${SCRIPT_DIR}"
fi

# Trigger benchmarks
docker run --rm \
-v "${HOME}/.gradle":/home/benchmark/.gradle:delegated \
-v "${PWD}/..":/profiler:delegated \
-w /profiler/benchmark \
-e GRADLE_OPTS="-Dorg.gradle.daemon=false" \
--entrypoint=./benchmarks.sh \
--name java-profiler-benchmark \
--cap-add SYS_ADMIN \
java-profiler/benchmark \
"$@"

cd "${INITIAL_DIR}"
21 changes: 21 additions & 0 deletions benchmark/utils/k6.js
@@ -0,0 +1,21 @@
import {check} from 'k6';

export function checkResponse(response) {
const checks = Array.prototype.slice.call(arguments, 1);
const reduced = checks.reduce((result, current) => Object.assign(result, current), {});
check(response, reduced);
}

export const isOk = {
'is OK': r => r.status === 200
};

export const isRedirect = {
'is redirect': r => r.status >= 300 && r.status < 400
};

export function bodyContains(text) {
return {
'body contains': r => r.body.includes(text)
}
}
21 changes: 21 additions & 0 deletions benchmark/utils/run-k6-load-test.sh
@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -eu

command=$1
exit_code=0

cleanup() {
# run the exit command
bash -c "${command}"
exit $exit_code
}

# EXIT alone is sufficient under `set -e`; adding ERR/INT/TERM would run cleanup twice
trap cleanup EXIT

echo "Starting k6 load test, logs are recorded into ${LOGS_DIR}/k6.log..."

# run the k6 benchmark and store the result as JSON
k6 run k6.js --out "json=${OUTPUT_DIR}/k6_$(date +%s).json" > "${LOGS_DIR}/k6.log" 2>&1 || exit_code=$?

echo "k6 load test done !!!"
11 changes: 11 additions & 0 deletions benchmark/utils/run-on-server-ready.sh
@@ -0,0 +1,11 @@
#!/usr/bin/env bash
set -eu

url=$1
command=$2
# wait for the HTTP server to come up, then run the selected command once
until [[ "$(curl -fso /dev/null -w "%{http_code}" "${url}")" == 200 ]]; do
  sleep 1
done
bash -c "${command}"
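The readiness-polling pattern used here can be exercised without a real HTTP server by swapping the `curl` probe for any command that flips from failing to succeeding; below, readiness is simulated with a marker file created by a background job. A sketch, not part of the PR:

```shell
#!/usr/bin/env bash
set -eu

marker="$(mktemp -u)"            # path only; the file does not exist yet
( sleep 1; touch "$marker" ) &   # the "server" becomes ready after ~1s

# Poll until the readiness probe succeeds, then continue exactly once
polls=0
until [ -f "$marker" ]; do
  polls=$((polls + 1))
  sleep 0.2
done
rm -f "$marker"
echo "ready after ${polls} polls"
```

Because the probe runs in the `until` condition, its nonzero exit status does not trip `set -e`, which is the same reason the `curl` probe above is safe inside the loop condition.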