Add workflow for longitudinal benchmarking #5205

Merged 6 commits on Mar 20, 2023
55 changes: 55 additions & 0 deletions .github/workflows/longitudinal-benchmark.yml
@@ -0,0 +1,55 @@
# Longitudinal Benchmarks
#
# This workflow runs the benchmarks defined in the environment variable BENCHMARKS.
# It collects and aggregates the benchmark output, formats it, and feeds it to
# github-action-benchmark.
#
# The benchmark charts are live at https://input-output-hk.github.io/plutus/dev/bench
# The benchmark data is available at https://input-output-hk.github.io/plutus/dev/bench/data.js

name: Longitudinal Benchmarks

on:
  push:
    branches:
      - master

permissions:
  # Deployments permission to deploy the GitHub Pages website
  deployments: write
  # Contents permission to update benchmark contents in the gh-pages branch
  contents: write

jobs:
  longitudinal-benchmarks:
    name: Performance regression check
    runs-on: [self-hosted, plutus-benchmark]
    steps:
      - uses: actions/checkout@v3

      - name: Run benchmarks
        env:
          BENCHMARKS: "validation validation-decode"
        run: |
          for bench in $BENCHMARKS; do
            cabal run "$bench" 2>&1 | tee "${bench}-output.txt"
          done
          python ./scripts/format-benchmark-output.py

      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Plutus Benchmarks
          tool: 'customSmallerIsBetter'
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Push and deploy the GitHub Pages branch automatically
          auto-push: true
          # Enable an alert comment on the offending commit
          comment-on-alert: true
          # Mention @input-output-hk/plutus-core in the commit comment
          alert-comment-cc-users: '@input-output-hk/plutus-core'
          # Percentage value like "110%": the ratio of the current result to the
          # previous one. For example, going from 100 ns/iter to 110 ns/iter is 110%,
          # so a threshold of '105%' alerts on any regression worse than 5%.
          alert-threshold: '105%'
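
For context, a sketch of what the `Run benchmarks` step captures in each `${bench}-output.txt`: the Haskell benchmarks print criterion-style console output, and the formatter below keys off the `benchmarking` and `mean` lines. The benchmark name and numbers here are illustrative, not taken from this PR:

```
benchmarking crowdfunding/1
time                 402.1 μs   (398.5 μs .. 406.0 μs)
mean                 404.0 μs   (401.2 μs .. 406.9 μs)
std dev              9.5 μs     (7.2 μs .. 12.4 μs)
```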
23 changes: 23 additions & 0 deletions scripts/format-benchmark-output.py
@@ -0,0 +1,23 @@
import json
import os

result = []

# BENCHMARKS is set by the workflow, e.g. "validation validation-decode".
for benchmark in os.getenv("BENCHMARKS").split():
    with open(f"{benchmark}-output.txt", "r") as file:
        name = ""
        for line in file.readlines():
            # A "benchmarking <name>" line precedes each benchmark's results.
            if line.startswith("benchmarking"):
                name = line.split()[1]
            # The "mean <value> <unit> ..." line carries the statistic we chart.
            elif line.startswith("mean"):
                parts = line.split()
                mean = parts[1]
                unit = parts[2]
                result.append({
                    "name": f"{benchmark}-{name}",
                    "unit": unit,
                    "value": float(mean)
                })

with open("output.json", "w") as file:
    json.dump(result, file)
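
For the `customSmallerIsBetter` tool, github-action-benchmark consumes a JSON array of entries with `name`, `unit`, and `value` fields. Assuming the illustrative criterion excerpt above came from the `validation` benchmark, the script would write an `output.json` along these lines:

```json
[
  {"name": "validation-crowdfunding/1", "unit": "μs", "value": 404.0}
]
```

Smaller values are better under this tool, so the action raises an alert whenever a commit pushes a `value` above the configured 105% threshold relative to the previous run. Locally, the script can be exercised with something like `BENCHMARKS="validation" python ./scripts/format-benchmark-output.py`, given a `validation-output.txt` in the working directory.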