# balena deployment of self-hosted GitHub runners
Runners are deployed in two variants, `vm` and `container`, where `vm` is isolated and safe to use on public repositories. See github-runner-vm and self-hosted-runners for the image sources.
Firecracker allows overprovisioning or oversubscribing of both CPU and memory resources for virtual machines (VMs) running on a host. This means that the total vCPUs and memory allocated to the VMs can exceed the actual physical CPU cores and memory available on the host machine.
To make the most efficient use of host resources, we slightly underprovision the host hardware, so that even when jobs (e.g. Yocto builds) consume all allocated resources there is no contention that could lead to performance degradation.
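The underprovisioning rule above can be sanity-checked with a short script before launching VMs. The VM count and per-VM sizes below are hypothetical examples, not values from the runner fleet:

```shell
#!/bin/bash
# Sketch: compare planned Firecracker VM allocation against host capacity.
# vm_count, vcpus_per_vm, and mem_per_vm_mb are hypothetical examples.
set -euo pipefail

vm_count=6
vcpus_per_vm=4
mem_per_vm_mb=8192

host_cpus=$(nproc)
host_mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

total_vcpus=$((vm_count * vcpus_per_vm))
total_mem_mb=$((vm_count * mem_per_vm_mb))

echo "planned vCPUs: ${total_vcpus} / host CPUs: ${host_cpus}"
echo "planned memory: ${total_mem_mb} MB / host memory: ${host_mem_mb} MB"

# Firecracker allows oversubscription, but for runner workloads we want
# the planned totals to stay at or below the host's physical resources.
[ "${total_vcpus}" -gt "${host_cpus}" ] && echo "warning: vCPUs oversubscribed"
[ "${total_mem_mb}" -gt "${host_mem_mb}" ] && echo "warning: memory oversubscribed"
exit 0
```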
See the github-runner-vm README for more.
> **Note:** balenaOS can be deployed into Hetzner Robot.
- Order a suitable machine in an ES rack (remote power controls)
- Download the balenaOS production image from the target balenaCloud fleet
- For x64 only: unwrap the image
- Copy the unwrapped image to the S3 playground bucket and make it public: `aws s3 cp balena.img s3://{{bucket}}/ --acl public-read`
- Activate the Hetzner Rescue system
- Reboot or reset the server
> **Note:** This leaves the second block device unpaired and empty.
- Download and uncompress the unwrapped balenaOS image to `/tmp` using `wget`
- (Optional) Zero out the target disk(s): `for device in nvme{0,1}n1; do blkdiscard -f /dev/${device}; done`
- Download the image from S3 via `wget` (the URL is in the S3 dashboard)
- Write the image to disk (check `lsblk` output for the block device): `dd if=balena.img of=/dev/nvme1n1 bs=$(blockdev --getbsz /dev/nvme1n1)`
- Reboot
- Manually power cycle again via the Robot dashboard to work around this issue
> **Note:** Use the `generic-amd64` or `generic-aarch64` balenaOS device type.
- Follow the RAID1 setup steps here
- Download the image from S3 via `wget` (the URL is in the S3 dashboard)
- Write the image to the RAID array: `dd if=balena.img of=/dev/md/balena bs=4096`
- Monitor synchronization progress: `watch cat /proc/mdstat`
- Reboot when 100% synchronized
- Manually power cycle again via the Robot dashboard to work around this issue
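As a sketch, the manual `watch cat /proc/mdstat` step above can be replaced by polling until the resync line disappears, assuming the `/dev/md/balena` array from the steps above:

```shell
#!/bin/bash
# Sketch: wait for the RAID1 mirror to finish synchronizing before rebooting.
# While syncing, /proc/mdstat contains a "resync = NN.N%" progress line;
# when that line disappears, the array is fully synchronized.
set -euo pipefail

while grep -q resync /proc/mdstat; do
    grep resync /proc/mdstat   # print current progress
    sleep 30
done
echo "RAID array synchronized; safe to reboot"
```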