Commit 8ead840

zeme-wanav0d1ch authored and committed

Add workflow for longitudinal benchmarking (IntersectMBO#5205)

* New benchmark workflow
* alert threshold
* upd
* Addressed some comments
* WIP
* Applied requested changes

1 parent 1f73baf commit 8ead840

File tree

2 files changed: +78 -0 lines changed
@@ -0,0 +1,55 @@
# Longitudinal Benchmarks
#
# This workflow will run the benchmarks defined in the environment variable BENCHMARKS.
# It will collect and aggregate the benchmark output, format it and feed it to github-action-benchmark.
#
# The benchmark charts are live at https://input-output-hk.github.io/plutus/dev/bench
# The benchmark data is available at https://input-output-hk.github.io/plutus/dev/bench/data.js

name: Longitudinal Benchmarks

on:
  push:
    branches:
      - master

permissions:
  # Deployments permission to deploy GitHub pages website
  deployments: write
  # Contents permission to update benchmark contents in gh-pages branch
  contents: write

jobs:
  longitudinal-benchmarks:
    name: Performance regression check
    runs-on: [self-hosted, plutus-benchmark]
    steps:
      - uses: actions/[email protected]

      - name: Run benchmarks
        env:
          BENCHMARKS: "validation validation-decode"
        run: |
          for bench in $BENCHMARKS; do
            2>&1 cabal run "$bench" | tee "$bench-output.txt"
          done
          python ./scripts/format-benchmark-output.py

      - name: Store benchmark result
        uses: benchmark-action/[email protected]
        with:
          name: Plutus Benchmarks
          tool: 'customSmallerIsBetter'
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Push and deploy GitHub pages branch automatically
          auto-push: true
          # Enable alert commit comment
          comment-on-alert: true
          # Mention @input-output-hk/plutus-core in the commit comment
          alert-comment-cc-users: '@input-output-hk/plutus-core'
          # Percentage value like "110%".
          # It is a ratio indicating how much worse the current benchmark result is.
          # For example, if we now get 110 ns/iter and previously got 100 ns/iter, it is 110% worse.
          alert-threshold: '105%'
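For context, github-action-benchmark's `customSmallerIsBetter` tool reads `output-file-path` as a JSON array of objects, each with a string `name`, a string `unit`, and a numeric `value`. A minimal sketch of that shape (the entry names and timings below are invented for illustration, not real benchmark results):

```python
import json

# Hypothetical sample of the aggregated output.json produced by
# scripts/format-benchmark-output.py; field names follow the
# customSmallerIsBetter format used by github-action-benchmark.
entries = [
    {"name": "validation-auction_1-1", "unit": "ms", "value": 1.42},
    {"name": "validation-decode-auction_1-1", "unit": "ms", "value": 0.87},
]

# Serialize the same way the formatting script does (json.dump of a list).
payload = json.dumps(entries)

# Round-trip and sanity-check the expected shape: every entry has a
# name, a unit, and a numeric value.
decoded = json.loads(payload)
assert all({"name", "unit", "value"} <= set(e) for e in decoded)
assert all(isinstance(e["value"], float) for e in decoded)
```

Because smaller is better here, the action flags a regression when a `value` grows past the `alert-threshold` ratio (105%) relative to the previous run, then comments on the commit and CCs the plutus-core team.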

scripts/format-benchmark-output.py

+23
@@ -0,0 +1,23 @@
import json
import os

result = []

for benchmark in os.getenv("BENCHMARKS").split():
    with open(f"{benchmark}-output.txt", "r") as file:
        name = ""
        for line in file.readlines():
            if line.startswith("benchmarking"):
                name = line.split()[1]
            elif line.startswith("mean"):
                parts = line.split()
                mean = parts[1]
                unit = parts[2]
                result.append({
                    "name": f"{benchmark}-{name}",
                    "unit": unit,
                    "value": float(mean)
                })

with open("output.json", "w") as file:
    json.dump(result, file)
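The script keys off two line prefixes in criterion's plain-text report: `benchmarking <name>` introduces each benchmark, and `mean <value> <unit> ...` carries the statistic that is kept. A self-contained sketch of that parsing loop applied to a hypothetical snippet of criterion output (the benchmark name and timings are made up, and the literal `validation` stands in for the `$bench` loop variable):

```python
# Hypothetical excerpt of a criterion report; in the workflow these files
# are produced by `cabal run "$bench" | tee "$bench-output.txt"`.
sample = """\
benchmarking auction_1-1
time                 1.512 ms   (1.504 ms .. 1.521 ms)
mean                 1.509 ms   (1.505 ms .. 1.514 ms)
"""

result = []
name = ""
for line in sample.splitlines():
    if line.startswith("benchmarking"):
        # Remember the current benchmark's name until its stats arrive.
        name = line.split()[1]
    elif line.startswith("mean"):
        # "mean <value> <unit> ..." -> take the point estimate and unit.
        parts = line.split()
        result.append({
            "name": f"validation-{name}",  # "validation" stands in for $bench
            "unit": parts[2],
            "value": float(parts[1]),
        })

print(result)
# → [{'name': 'validation-auction_1-1', 'unit': 'ms', 'value': 1.509}]
```

Note the script keeps the raw unit string from the report, so entries may mix units (ms, μs) unless the benchmarks happen to report uniformly; the `time` line and confidence intervals are ignored.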
