This document helps you set up a development environment so you can contribute to KubeArchive. It also contains instructions for running integration tests, remote IDE debugging, and other processes.
Install these tools, which are used throughout this guide: `git`, `go`, `ko`, `kubectl`, `kind`, `helm`, `curl`, and `jq`.
- Create your fork of KubeArchive following this guide.
- Clone it to your computer:

  ```bash
  git clone git@github.com:${YOUR_GITHUB_USERNAME}/kubearchive.git
  cd kubearchive
  git remote add upstream https://github.com/kubearchive/kubearchive.git
  git remote set-url --push upstream no_push
  ```
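  The `no_push` push URL is a sentinel that blocks accidental pushes to upstream. A throwaway-repo sketch of the resulting remote layout (`example-user` stands in for your fork):

  ```bash
  # Reproduce the remote layout in a temporary repo; 'example-user' is a placeholder fork.
  tmp=$(mktemp -d)
  cd "$tmp"
  git init -q demo
  cd demo
  git remote add origin git@github.com:example-user/kubearchive.git
  git remote add upstream https://github.com/kubearchive/kubearchive.git
  git remote set-url --push upstream no_push
  git remote get-url --push upstream   # prints: no_push
  ```

  With this layout, `git push upstream` fails immediately because `no_push` is not a valid remote URL, while fetches from upstream still work.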
- Set up a Kubernetes cluster with KinD:

  ```bash
  kind create cluster
  ```
- Set up `ko` to upload images to the KinD cluster, or any other registry you want to use:

  ```bash
  export KO_DOCKER_REPO="kind.local"
  ```
- Install knative-eventing core and cert-manager, wait for them to be ready, and enable new-apiserversource-filters:

  ```bash
  export CERT_MANAGER_VERSION=v1.9.1
  export KNATIVE_EVENTING_VERSION=v1.15.0
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.yaml
  kubectl apply -f https://github.com/knative/eventing/releases/download/knative-${KNATIVE_EVENTING_VERSION}/eventing-core.yaml
  kubectl rollout status deployment --namespace=cert-manager --timeout=30s
  kubectl rollout status deployment --namespace=knative-eventing --timeout=30s
  kubectl apply -f - << EOF
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: config-features
    namespace: knative-eventing
    labels:
      eventing.knative.dev/release: devel
      knative.dev/config-propagation: original
      knative.dev/config-category: eventing
  data:
    new-apiserversource-filters: enabled
  EOF
  ```
- Generate operator code:

  ```bash
  cmd/operator/generate.sh
  ```
- Use Helm to install KubeArchive:

  ```bash
  helm install kubearchive charts/kubearchive --create-namespace -n kubearchive \
      --set-string apiServer.image=$(ko build github.com/kubearchive/kubearchive/cmd/api) \
      --set-string sink.image=$(ko build github.com/kubearchive/kubearchive/cmd/sink) \
      --set-string operator.image=$(ko build github.com/kubearchive/kubearchive/cmd/operator)
  ```
- Check that the KubeArchive deployments are Ready:

  ```bash
  kubectl get -n kubearchive deployments
  ```
- List the deployed Helm chart:

  ```bash
  helm list -n kubearchive
  ```
After you make changes to the code, use Helm to redeploy KubeArchive:

```bash
helm upgrade kubearchive charts/kubearchive -n kubearchive \
    --set-string apiServer.image=$(ko build github.com/kubearchive/kubearchive/cmd/api) \
    --set-string sink.image=$(ko build github.com/kubearchive/kubearchive/cmd/sink) \
    --set-string operator.image=$(ko build github.com/kubearchive/kubearchive/cmd/operator)
```

To uninstall KubeArchive:

```bash
helm uninstall -n kubearchive kubearchive
```
- In a new terminal tab, create a port-forward:

  ```bash
  kubectl port-forward -n kubearchive svc/kubearchive-database 5432:5432
  ```
- Populate the database with test objects:

  ```bash
  go run database/init_db.go
  ```
By default, KubeArchive listens to Events in the `test` namespace.
- Generate some activity by creating a pod:

  ```bash
  kubectl run -n test busybox --image=busybox
  ```
- Follow the logs on the KubeArchive sink:

  ```bash
  kubectl logs -n kubearchive -l app=kubearchive-sink -f
  ```
- Use `kubectl` to port forward; this will keep the terminal occupied:

  ```bash
  kubectl port-forward -n kubearchive svc/kubearchive-api-server 8081:8081
  ```
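  If you prefer not to dedicate a terminal, the same command can run in the background. A sketch of the pattern, with `sleep` standing in for the long-running `kubectl port-forward` so it runs anywhere:

  ```bash
  # Background-process pattern; `sleep 60` stands in for `kubectl port-forward ...`.
  sleep 60 &
  PF_PID=$!                  # remember the background process's PID
  # ... run queries against https://localhost:8081 here ...
  kill "$PF_PID" && echo "port-forward stopped"
  # prints: port-forward stopped
  ```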
- Get the Certificate Authority (CA) from the `kubearchive-api-server-tls` secret:

  ```bash
  kubectl get -n kubearchive secrets kubearchive-api-server-tls -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
  ```
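  The secret stores `ca.crt` base64-encoded, which is why the command pipes through `base64 -d`. A local illustration of that decode step using sample data (not a real certificate):

  ```bash
  # Encode a sample string the way Kubernetes stores secret data, then decode it back.
  encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
  printf '%s' "$encoded" | base64 -d
  # prints: -----BEGIN CERTIFICATE-----
  ```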
- [Optional] Create a service account with a specific role to test the REST API. The Helm chart already provides `kubearchive-test-sa` with `view` privileges for testing purposes.
- On a new terminal, use `curl` or your browser to perform a query:

  ```bash
  curl -s --cacert ca.crt -H "Authorization: Bearer $(kubectl create token kubearchive-test -n kubearchive)" \
      https://localhost:8081/apis/batch/v1/jobs | jq
  ```
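  The trailing `| jq` pretty-prints the JSON response, and it can also filter. A sketch against a hypothetical response shaped like a Kubernetes `JobList` (the sample payload below is invented for illustration):

  ```bash
  # Extract just the job names from a Kubernetes-style List object (sample data).
  cat << 'EOF' | jq -r '.items[].metadata.name'
  {"kind": "JobList", "items": [{"metadata": {"name": "my-job"}}, {"metadata": {"name": "other-job"}}]}
  EOF
  # prints:
  # my-job
  # other-job
  ```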
- Check the new logs on the KubeArchive API:

  ```bash
  kubectl logs -n kubearchive -l app=kubearchive-api-server
  ```
- Use `kubectl` to port forward; this will keep the terminal occupied:

  ```bash
  kubectl port-forward -n kubearchive svc/kubearchive-api-server 8081:8081
  ```
- Get the Certificate Authority (CA) from the `kubearchive-api-server-tls` secret:

  ```bash
  kubectl get -n kubearchive secrets kubearchive-api-server-tls -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
  ```
- Run the CLI:

  ```bash
  go run cmd/kubectl-archive/main.go get batch/v1/jobs --token $(kubectl -n kubearchive create token kubearchive-test)
  ```
- Generate a new job, and run the CLI again:

  ```bash
  kubectl create job my-job --image=busybox
  go run cmd/kubectl-archive/main.go get batch/v1/jobs --token $(kubectl -n kubearchive create token kubearchive-test)
  ```
Use `go test` to run the integration test suite:

```bash
go test -v ./test/integration -tags=integration
```
Use delve to start a debugger and attach to it from your IDE.
- Deploy the chart with `ko` and `helm` in debug mode, using an image that includes `delve`. The `--disable-optimizations` flag builds the binary without compiler optimizations so `delve` can map it back to the source accurately:

  ```bash
  helm install kubearchive charts/kubearchive --create-namespace -n kubearchive \
      --set apiServer.debug=true \
      --set-string apiServer.image=$(KO_DEFAULTBASEIMAGE=gcr.io/k8s-skaffold/skaffold-debug-support/go:latest ko build --disable-optimizations github.com/kubearchive/kubearchive/cmd/api) \
      --set-string sink.image=$(ko build github.com/kubearchive/kubearchive/cmd/sink) \
      --set-string operator.image=$(ko build github.com/kubearchive/kubearchive/cmd/operator)
  ```
- Forward the ports 8081 and 40000 from the Pod directly:

  ```bash
  kubectl port-forward -n kubearchive svc/kubearchive-api-server 8081:8081 40000:40000
  ```
- Enable breakpoints in your IDE.
- Connect to the process using port 40000.
- Query the API using `curl` or your browser:

  ```bash
  curl -s --cacert ca.crt -H "Authorization: Bearer $(kubectl create token kubearchive-test -n kubearchive)" \
      https://localhost:8081/apis/batch/v1/jobs | jq
  ```
- Using KinD and Podman. If you get this error:

  ```text
  ERROR: failed to create cluster: running kind with rootless provider requires setting systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/
  ```

  try creating the cluster with this command:

  ```bash
  systemd-run -p Delegate=yes --user --scope kind create cluster
  ```
- Using KinD and Podman Desktop. If you get this error:

  ```text
  Error: failed to publish images: error publishing ko://github.com/kubearchive/kubearchive/cmd/api: no nodes found for cluster "kind"
  ```

  export the `KIND_CLUSTER_NAME` environment variable with the appropriate name of the kind cluster:

  ```bash
  # NOTE: if you have more than one kind cluster running, set the proper one manually
  export KIND_CLUSTER_NAME=$(kind -q get clusters)
  ```