
Prerequisites

Minimum Resources

Please ensure you have at least: 4 CPUs, 16 GB RAM, and 32 GB of free disk space.
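On Linux, you can sanity-check these minimums before starting; the snippet below is an illustrative sketch using standard coreutils and /proc, not one of the tutorial scripts:

```shell
# Rough resource check against the tutorial minimums (Linux only).
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)
disk_gb=$(df -k --output=avail / | tail -1 | awk '{printf "%d", $1/1048576}')
echo "CPUs: ${cpus}, RAM: ${mem_gb} GB, free disk: ${disk_gb} GB"
[ "${cpus}" -ge 4 ] || echo "warning: fewer than 4 CPUs"
[ "${mem_gb}" -ge 16 ] || echo "warning: less than 16 GB RAM"
[ "${disk_gb}" -ge 32 ] || echo "warning: less than 32 GB free disk"
```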

Warning: Windows instructions are best-effort, and this tutorial may cause instability on some systems.


Table of Contents

  • Linux
  • macOS
  • Windows

Linux

Install Docker, kubectl, kind, clusterctl and helm

Install Docker as documented on the Docker website. You can choose between Docker Engine and Docker Desktop.

Verify the Docker installation via:

docker version
docker ps

Notes:

  • Fedora users should ensure Docker is correctly configured before continuing.
  • If you are using Docker Desktop, please ensure the Docker VM has at least 4 CPU, 10 GB RAM and 32 GB disk.
  • You should ensure that some system settings are correct before continuing.

Install kubectl as documented in Install and Set Up kubectl on Linux.

Verify kubectl via:

kubectl version --client -o yaml

At the time of this writing, the above link will guide you to download version 1.25 of the kubectl binary. Based on the official Kubernetes version skew policy, you can use kubectl 1.24, 1.25, or 1.26 to follow the tutorial, which has you create and upgrade Kubernetes clusters running versions 1.24 and 1.25.
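The skew rule (kubectl within one minor version of the cluster) can be checked mechanically. The snippet below is an illustrative sketch with hard-coded version strings, not a tutorial step; in practice the client version would come from `kubectl version --client`:

```shell
# Illustrative skew check: kubectl may be at most one minor version
# away from the cluster it talks to.
client="v1.25.2"   # e.g. your kubectl binary version
server="v1.24.0"   # e.g. the cluster version you plan to create
client_minor=$(echo "${client#v}" | cut -d. -f2)
server_minor=$(echo "${server#v}" | cut -d. -f2)
diff=$((client_minor - server_minor))
abs=${diff#-}   # absolute value of the minor-version difference
if [ "${abs}" -le 1 ]; then
  echo "within skew policy"
else
  echo "outside skew policy"
fi
```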

Install kind v0.16.0 by downloading it from the kind release page and adding it to the path.

curl -L https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-linux-amd64 -o /tmp/kind
sudo install -o root -g root -m 0755 /tmp/kind /usr/local/bin/kind

kind version

Install clusterctl v1.2.4 by downloading it from the ClusterAPI release page and adding it to the path.

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.2.4/clusterctl-linux-amd64 -o /tmp/clusterctl
sudo install -o root -g root -m 0755 /tmp/clusterctl /usr/local/bin/clusterctl

clusterctl version

Install helm v3.10.0 by downloading it from the Helm release page and adding it to the path.

curl -L https://get.helm.sh/helm-v3.10.0-linux-amd64.tar.gz -o /tmp/helm.tar.gz
tar -zxvf /tmp/helm.tar.gz -C /tmp
sudo install -o root -g root -m 0755 /tmp/linux-amd64/helm /usr/local/bin/helm

helm version

Clone the tutorial repository

git clone https://github.com/ykakarap/kubecon-na-22-capi-lab
cd kubecon-na-22-capi-lab

export CLUSTERCTL_REPOSITORY_PATH=$(pwd)/clusterctl/repository

Notes:

  • The CLUSTERCTL_REPOSITORY_PATH environment variable is required later so we're able to run the tutorial offline.
  • You can also download the repository via this link if you don't have git installed: main.zip.
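Before continuing, you can sanity-check that the variable points at the repository you just cloned. This is a sketch, not a tutorial step; it falls back to the default path relative to the current directory when the variable is unset:

```shell
# Sketch: confirm the offline clusterctl repository directory exists.
repo="${CLUSTERCTL_REPOSITORY_PATH:-$(pwd)/clusterctl/repository}"
if [ -d "${repo}" ]; then
  echo "offline repository found: ${repo}"
else
  echo "missing: ${repo} (clone the repo and re-export the variable)"
fi
```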

Pre-download container images

As we don't want to rely on the conference Wi-Fi, please pre-pull the container images used in the tutorial via:

sh ./scripts/prepull-images.sh

Note: You may run into Docker Hub rate limiting over the conference Wi-Fi. To avoid it, log in to Docker with your Docker Hub account (docker login).

Verification

This section describes steps to verify everything has been installed correctly.

Create the kind cluster (this also pre-loads the images):

sh ./scripts/create-kind-cluster.sh

Should return:

Creating cluster "kubecon-na-22-capi-lab" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kubecon-na-22-capi-lab"
You can now use your cluster with:

kubectl cluster-info --context kind-kubecon-na-22-capi-lab

Thanks for using kind! 😊
Load pre-downloaded images into kind cluster
Image: "gcr.io/k8s-staging-cluster-api/capd-manager:v1.2.4" with ID "sha256:ce58906cdf5645b9a74274d85b56acc717c29be16019732fd7a647ad898dadc8" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.4" with ID "sha256:59a7be1f86721c75bceb6d9a31f12846ae9c6984130301b943bb4bc90a9a8f95" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.2.4" with ID "sha256:b0fa2436bbfa2e6c9f60175b82c0cb9d98e8d77c9659d9437d224cf25ec80000" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.2.4" with ID "sha256:bb531d56d11c3086b5b03db7e9e42f68b003fa39ad07d1ce6a8d22e669f8c23b" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-cainjector:v1.9.1" with ID "sha256:11778d29f8cc283a72a84fbd68601a631fc7705fe2f12a70ea5df7ca3262dfe9" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-controller:v1.9.1" with ID "sha256:8eaca4249b016e1e355957d357a39a0a8a837e1837054e8762fe7d1cd13051af" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-webhook:v1.9.1" with ID "sha256:d3348bcdc1e7e39e655c3b17106fe2e2038cfd70d080a3ac89a9eaf3bd26fc3d" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "gcr.io/kakaraparthy-devel/cluster-api-visualizer:v1.0.0" with ID "sha256:76f45f9fdeb341ab49094bc424dda68acb2c2e22f08b63c9b8855fe42b620f17" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "gcr.io/kakaraparthy-devel/test-extension:v1.0.1" with ID "sha256:5f09148f8fbfcffea6738f07e60205b477dcec37df31e49ad8432886ec46f29d" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...

Note: The following error can be ignored, as the image load works even if it occurs: ERROR: failed to load image: command "docker exec --privileged ... already exists

Test kubectl:

kubectl get node

Should return:

NAME                                   STATUS   ROLES           AGE    VERSION
kubecon-na-22-capi-lab-control-plane   Ready    control-plane   3m5s   v1.25.2
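If you'd rather check readiness in a script than by eye, the STATUS column can be parsed from that output. The sample line below is hard-coded for illustration; in practice it would come from `kubectl get node --no-headers`:

```shell
# Sketch: assert the control-plane node reports Ready.
# (hard-coded sample line; real input: kubectl get node --no-headers)
line='kubecon-na-22-capi-lab-control-plane   Ready    control-plane   3m5s   v1.25.2'
status=$(echo "${line}" | awk '{print $2}')
if [ "${status}" = "Ready" ]; then
  echo "node is Ready"
else
  echo "node status: ${status}"
fi
```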

Delete the kind cluster:

kind delete cluster --name=kubecon-na-22-capi-lab

Next: Creating Your First Cluster With Cluster API

macOS

Install Docker, kubectl, kind, clusterctl and helm

Install Docker Desktop as documented in Install Docker Desktop on Mac.

Verify the Docker installation via:

docker version
docker ps

Note: Please ensure the Docker VM has at least 4 CPU, 10 GB RAM and 32 GB disk.

Install kubectl as documented in Install and Set Up kubectl on macOS.

Verify kubectl via:

kubectl version --client -o yaml

At the time of this writing, the above link will guide you to download version 1.25 of the kubectl binary. Based on the official Kubernetes version skew policy, you can use kubectl 1.24, 1.25, or 1.26 to follow the tutorial, which has you create and upgrade Kubernetes clusters running versions 1.24 and 1.25.

Install kind v0.16.0 by downloading it from the kind release page and adding it to the path.

For amd64:

curl -L https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-darwin-amd64 -o /tmp/kind
chmod +x /tmp/kind
sudo mv /tmp/kind /usr/local/bin/kind
sudo chown root: /usr/local/bin/kind

kind version

For arm64 (if your Mac has Apple Silicon, e.g. an M1 CPU):

curl -L https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-darwin-arm64 -o /tmp/kind
chmod +x /tmp/kind
sudo mv /tmp/kind /usr/local/bin/kind
sudo chown root: /usr/local/bin/kind

kind version

Install clusterctl v1.2.4 by downloading it from the ClusterAPI release page and adding it to the path.

For amd64:

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.2.4/clusterctl-darwin-amd64 -o /tmp/clusterctl
chmod +x /tmp/clusterctl
sudo mv /tmp/clusterctl /usr/local/bin/clusterctl
sudo chown root: /usr/local/bin/clusterctl

clusterctl version

For arm64 (if your Mac has Apple Silicon, e.g. an M1 CPU):

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.2.4/clusterctl-darwin-arm64 -o /tmp/clusterctl
chmod +x /tmp/clusterctl
sudo mv /tmp/clusterctl /usr/local/bin/clusterctl
sudo chown root: /usr/local/bin/clusterctl

clusterctl version

Install helm v3.10.0 by downloading it from the Helm release page and adding it to the path.

For amd64:

curl -L https://get.helm.sh/helm-v3.10.0-darwin-amd64.tar.gz -o /tmp/helm.tar.gz
tar -zxvf /tmp/helm.tar.gz -C /tmp
chmod +x /tmp/darwin-amd64/helm
sudo mv /tmp/darwin-amd64/helm /usr/local/bin/helm
sudo chown root: /usr/local/bin/helm

helm version

For arm64 (if your Mac has Apple Silicon, e.g. an M1 CPU):

curl -L https://get.helm.sh/helm-v3.10.0-darwin-arm64.tar.gz -o /tmp/helm.tar.gz
tar -zxvf /tmp/helm.tar.gz -C /tmp
chmod +x /tmp/darwin-arm64/helm
sudo mv /tmp/darwin-arm64/helm /usr/local/bin/helm
sudo chown root: /usr/local/bin/helm

helm version

Clone the tutorial repository

git clone https://github.com/ykakarap/kubecon-na-22-capi-lab
cd kubecon-na-22-capi-lab

export CLUSTERCTL_REPOSITORY_PATH=$(pwd)/clusterctl/repository

Notes:

  • The CLUSTERCTL_REPOSITORY_PATH environment variable is required later so we're able to run the tutorial offline.
  • You can also download the repository via this link if you don't have git installed: main.zip.

Pre-download container images

As we don't want to rely on the conference Wi-Fi, please pre-pull the container images used in the tutorial via:

sh ./scripts/prepull-images.sh

Note: You may run into Docker Hub rate limiting over the conference Wi-Fi. To avoid it, log in to Docker with your Docker Hub account (docker login).

Verification

This section describes steps to verify everything has been installed correctly.

Create the kind cluster (this also pre-loads the images):

sh ./scripts/create-kind-cluster.sh

Should return:

Creating cluster "kubecon-na-22-capi-lab" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kubecon-na-22-capi-lab"
You can now use your cluster with:

kubectl cluster-info --context kind-kubecon-na-22-capi-lab

Thanks for using kind! 😊
Load pre-downloaded images into kind cluster
Image: "gcr.io/k8s-staging-cluster-api/capd-manager:v1.2.4" with ID "sha256:ce58906cdf5645b9a74274d85b56acc717c29be16019732fd7a647ad898dadc8" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.4" with ID "sha256:59a7be1f86721c75bceb6d9a31f12846ae9c6984130301b943bb4bc90a9a8f95" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.2.4" with ID "sha256:b0fa2436bbfa2e6c9f60175b82c0cb9d98e8d77c9659d9437d224cf25ec80000" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.2.4" with ID "sha256:bb531d56d11c3086b5b03db7e9e42f68b003fa39ad07d1ce6a8d22e669f8c23b" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-cainjector:v1.9.1" with ID "sha256:11778d29f8cc283a72a84fbd68601a631fc7705fe2f12a70ea5df7ca3262dfe9" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-controller:v1.9.1" with ID "sha256:8eaca4249b016e1e355957d357a39a0a8a837e1837054e8762fe7d1cd13051af" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-webhook:v1.9.1" with ID "sha256:d3348bcdc1e7e39e655c3b17106fe2e2038cfd70d080a3ac89a9eaf3bd26fc3d" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "gcr.io/kakaraparthy-devel/cluster-api-visualizer:v1.0.0" with ID "sha256:76f45f9fdeb341ab49094bc424dda68acb2c2e22f08b63c9b8855fe42b620f17" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "gcr.io/kakaraparthy-devel/test-extension:v1.0.1" with ID "sha256:5f09148f8fbfcffea6738f07e60205b477dcec37df31e49ad8432886ec46f29d" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...

Note: The following error can be ignored, as the image load works even if it occurs: ERROR: failed to load image: command "docker exec --privileged ... already exists

Test kubectl:

kubectl get node

Should return:

NAME                                   STATUS   ROLES           AGE    VERSION
kubecon-na-22-capi-lab-control-plane   Ready    control-plane   3m5s   v1.25.2

Delete the kind cluster:

kind delete cluster --name=kubecon-na-22-capi-lab

Next: Creating Your First Cluster With Cluster API

Windows

Note: Windows instructions are based on Windows 11 with PowerShell and Docker Desktop 4.10.1 using WSL 2.

Clone the tutorial repository

git clone https://github.com/ykakarap/kubecon-na-22-capi-lab
cd kubecon-na-22-capi-lab

$env:CLUSTERCTL_REPOSITORY_PATH = ([System.Uri](Get-Item .).FullName).AbsoluteUri + "/clusterctl/repository"

Notes:

  • The CLUSTERCTL_REPOSITORY_PATH environment variable is required later so we're able to run the tutorial offline.
  • You can also download the repository via this link if you don't have git installed: main.zip.

Put the tutorial repo on the $PATH

This tutorial uses the base of the tutorial repo to place and execute binaries. To add the current directory (which should be kubecon-na-22-capi-lab) to the $PATH, run:

$env:path = (Get-Item .).FullName + ';' + $env:path

Install Docker, kubectl, kind, clusterctl and helm

Install Docker Desktop 4.10.1 on Windows.

Note: This tutorial works best on Docker Desktop 4.10.1. Newer versions of Docker Desktop may crash and cause system instability when running Cluster API.

Verify the Docker installation via:

docker version
docker ps

Note: Please ensure the Docker VM has at least 4 CPU, 10 GB RAM and 32 GB disk.

Install kubectl as documented in Install and Set Up kubectl on Windows.

Verify kubectl via:

kubectl version --client -o yaml

At the time of this writing, the above link will guide you to download version 1.25 of the kubectl binary. Based on the official Kubernetes version skew policy, you can use kubectl 1.24, 1.25, or 1.26 to follow the tutorial, which has you create and upgrade Kubernetes clusters running versions 1.24 and 1.25.

Install kind v0.16.0 by downloading it from the kind release page and adding it to the path.

curl.exe -L https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-windows-amd64 -o kind.exe
# Note: If you don't have curl installed, just download the binary manually and rename it to kind.exe.

kind version

Install clusterctl v1.2.4 by downloading it from the ClusterAPI release page and adding it to the path.

curl.exe -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.2.4/clusterctl-windows-amd64.exe -o clusterctl.exe
# Note: If you don't have curl installed, just download the binary manually and rename it to clusterctl.exe.

clusterctl version

Install helm v3.10.0 by downloading it from the Helm release page and adding it to the path.

# amd64
curl.exe -L https://get.helm.sh/helm-v3.10.0-windows-amd64.zip -o ./helm.zip
Expand-Archive ./helm.zip ./helm
mv .\helm\windows-amd64\helm.exe .

helm version

Pre-download container images

As we don't want to rely on the conference Wi-Fi, please pre-pull the container images used in the tutorial via:

.\scripts\prepull-images.ps1

Note: You might have to enable running scripts by executing Set-ExecutionPolicy Unrestricted in a PowerShell session run as Administrator.

Note: You may run into Docker Hub rate limiting over the conference Wi-Fi. To avoid it, log in to Docker with your Docker Hub account (docker login).

Verification

This section describes steps to verify everything has been installed correctly.

Create the kind cluster (this also pre-loads the images):

.\scripts\create-kind-cluster.ps1

Should return:

Creating cluster "kubecon-na-22-capi-lab" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kubecon-na-22-capi-lab"
You can now use your cluster with:

kubectl cluster-info --context kind-kubecon-na-22-capi-lab

Thanks for using kind! 😊
Load pre-downloaded images into kind cluster
Image: "gcr.io/k8s-staging-cluster-api/capd-manager:v1.2.4" with ID "sha256:ce58906cdf5645b9a74274d85b56acc717c29be16019732fd7a647ad898dadc8" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.4" with ID "sha256:59a7be1f86721c75bceb6d9a31f12846ae9c6984130301b943bb4bc90a9a8f95" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.2.4" with ID "sha256:b0fa2436bbfa2e6c9f60175b82c0cb9d98e8d77c9659d9437d224cf25ec80000" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.2.4" with ID "sha256:bb531d56d11c3086b5b03db7e9e42f68b003fa39ad07d1ce6a8d22e669f8c23b" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-cainjector:v1.9.1" with ID "sha256:11778d29f8cc283a72a84fbd68601a631fc7705fe2f12a70ea5df7ca3262dfe9" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-controller:v1.9.1" with ID "sha256:8eaca4249b016e1e355957d357a39a0a8a837e1837054e8762fe7d1cd13051af" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "quay.io/jetstack/cert-manager-webhook:v1.9.1" with ID "sha256:d3348bcdc1e7e39e655c3b17106fe2e2038cfd70d080a3ac89a9eaf3bd26fc3d" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "gcr.io/kakaraparthy-devel/cluster-api-visualizer:v1.0.0" with ID "sha256:76f45f9fdeb341ab49094bc424dda68acb2c2e22f08b63c9b8855fe42b620f17" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...
Image: "gcr.io/kakaraparthy-devel/test-extension:v1.0.1" with ID "sha256:5f09148f8fbfcffea6738f07e60205b477dcec37df31e49ad8432886ec46f29d" not yet present on node "kubecon-na-22-capi-lab-control-plane", loading...

Note: The following error can be ignored, as the image load works even if it occurs: ERROR: failed to load image: command "docker exec --privileged ... already exists

Test kubectl:

kubectl get node

Should return:

NAME                                   STATUS   ROLES           AGE    VERSION
kubecon-na-22-capi-lab-control-plane   Ready    control-plane   3m5s   v1.25.2

Delete the kind cluster:

kind delete cluster --name=kubecon-na-22-capi-lab

Avoid GitHub rate-limiting when running the tutorial without the local clusterctl repository

Note: The tutorial uses a local clusterctl repository, so these steps are not required to run the tutorial; they are documented in case you want to run the tutorial without the local repository.

clusterctl accesses GitHub to install Cluster API. To avoid rate limiting, please set up a GitHub token.

First, create a token as documented on the GitHub website (no permissions needed).

Export the GITHUB_TOKEN in your environment

Linux and macOS:

export GITHUB_TOKEN=<GITHUB_TOKEN>

Windows:

$env:GITHUB_TOKEN = "<GITHUB_TOKEN>"
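On Linux and macOS you can additionally fail fast before invoking clusterctl; the snippet below is a sketch, and GITHUB_TOKEN is the variable name shown above, not something introduced here:

```shell
# Sketch: warn early when GITHUB_TOKEN is unset.
if [ -z "${GITHUB_TOKEN:-}" ]; then
  echo "GITHUB_TOKEN is not set; clusterctl may hit GitHub rate limits"
else
  echo "GITHUB_TOKEN is set"
fi
```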


Next: Creating Your First Cluster With Cluster API

Now that you've prepared your local environment, let's build our first Kubernetes cluster using Cluster API!