This repository has been archived by the owner on Feb 3, 2023. It is now read-only.

Merge pull request #4 from bbetter173/feature/reconciliation
clrxbl authored Aug 6, 2022
2 parents 294db87 + ea15897 commit 3e47e9f
Showing 31 changed files with 1,326 additions and 430 deletions.
42 changes: 42 additions & 0 deletions .github/workflows/helm-lint.yaml
@@ -0,0 +1,42 @@
name: Lint and Test Charts

on: pull_request

jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Set up Helm
        uses: azure/setup-helm@v1
        with:
          version: v3.9.2

      - uses: actions/setup-python@v2
        with:
          python-version: 3.7

      - name: Set up chart-testing
        uses: helm/[email protected]

      - name: Run chart-testing (list-changed)
        id: list-changed
        run: |
          changed=$(ct list-changed --target-branch ${{ github.event.repository.default_branch }})
          if [[ -n "$changed" ]]; then
            echo "::set-output name=changed::true"
          fi
      - name: Run chart-testing (lint)
        run: ct lint --target-branch ${{ github.event.repository.default_branch }}

      - name: Create kind cluster
        uses: helm/[email protected]
        if: steps.list-changed.outputs.changed == 'true'

      - name: Run chart-testing (install)
        run: ct install --target-branch ${{ github.event.repository.default_branch }}
31 changes: 31 additions & 0 deletions .github/workflows/helm-release.yaml
@@ -0,0 +1,31 @@
name: Helm Publish

on:
  push:
    branches:
      - main
jobs:
  release:
    permissions:
      contents: write
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "[email protected]"
      - name: Install Helm
        uses: azure/setup-helm@v1
        with:
          version: v3.8.1

      - name: Run chart-releaser
        uses: helm/[email protected]
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
11 changes: 6 additions & 5 deletions Dockerfile
@@ -14,14 +14,15 @@ RUN apk add --no-cache --virtual .python_deps build-base python3-dev libffi-dev
    mkdir -p /app/src /app && \
    poetry config virtualenvs.create false

ADD src /app/src
ADD pyproject.toml /app/pyproject.toml

WORKDIR /app
ENV PYTHONPATH=${PYTHONPATH}:/app

RUN apk add --no-cache --virtual .build_deps gcc g++ && \
    cd /app && \
    poetry install --no-dev && \
    apk del .build_deps

ADD src /app/src

WORKDIR /app
ENV PYTHONPATH=${PYTHONPATH}:/app

CMD ["kopf", "run", "--all-namespaces", "--liveness=http://0.0.0.0:8080/health", "/app/src/tailscale_svc_lb_controller/main.py"]
20 changes: 20 additions & 0 deletions Makefile
@@ -0,0 +1,20 @@


REPOSITORY=clrxbl/tailscale-svc-lb
TAG=latest

build: build-controller build-runtime

push: push-controller push-runtime

build-controller:
	docker build . -t $(REPOSITORY)-controller:$(TAG)

push-controller: build-controller
	docker push $(REPOSITORY)-controller:$(TAG)

build-runtime:
	cd runtime && docker build . -t $(REPOSITORY):$(TAG)

push-runtime: build-runtime
	docker push $(REPOSITORY):$(TAG)
47 changes: 40 additions & 7 deletions README.md
@@ -12,25 +12,58 @@ It deploys the controller & any svc-lb pods in the namespace where it's installed

Once the controller is deployed, create a LoadBalancer service with the loadBalancerClass set to "svc-lb.tailscale.iptables.sh/lb".
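For reference, a minimal Service of this kind might look like the sketch below (the name, namespace, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder service name
  namespace: default      # placeholder namespace
spec:
  type: LoadBalancer
  loadBalancerClass: svc-lb.tailscale.iptables.sh/lb   # must match LOAD_BALANCER_CLASS
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```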

There should be a DaemonSet created in the controller's namespace for the newly-created LoadBalancer service. View the logs of the leader-elected pod and click the login.tailscale.com link to authenticate. You only have to do this once per service.
There should be a Deployment (or DaemonSet) created in the controller's namespace for the newly-created LoadBalancer service. View the logs of the leader-elected pod and click the login.tailscale.com link to authenticate. You only have to do this once per service.

This can be automated by creating a secret in the controller's namespace called `tailscale-svc-lb` with the key `ts-auth-key` and the value set to your Tailscale registration token (auth key).
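As a sketch, such a secret could be created with a manifest like the following (the namespace and key value are placeholders; use the controller's namespace and a real Tailscale auth key):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tailscale-svc-lb        # must match SECRET_NAME
  namespace: tailscale-svc-lb   # placeholder: the controller's namespace
type: Opaque
stringData:
  ts-auth-key: tskey-auth-xxxxxxxxxxxx   # placeholder Tailscale auth key
```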

## Configuration Variables

All configuration options are supplied as environment variables.

| Variable | Description | Default |
|--------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|
| `RESOURCE_PREFIX` | Prefix to prepend to the service name when creating proxy resources | `ts-` |
| `SECRET_NAME` | Name of the secret that the `ts-auth-key` value should be used from | `tailscale-svc-lb` |
| `LOAD_BALANCER_CLASS` | LoadBalancerClass that this controller will implement | `svc-lb.tailscale.iptables.sh/lb` |
| `NODE_SELECTOR_LABEL` | Label to use when selecting nodes to run Tailscale on. The value of this label should be `true` | None |
| `IMAGE_PULL_SECRETS`                 | A semicolon-separated list of secret names to use as the `imagePullSecrets` for the Tailscale Proxy                                 | None                                           |
| `DEPLOYMENT_TYPE` | The type of deployment to use for the Tailscale Proxy. Can be one of: `Deployment`, `DaemonSet` | `Deployment` |
| `TS_PROXY_NAMESPACE` | Namespace all of the Tailscale Proxies will be created in | `default` |
| `TS_PROXY_REPLICA_COUNT` | The number of replicas to deploy for each Tailscale Proxy instance. Only used if `DEPLOYMENT_TYPE` is `Deployment` | `1` |
| `TS_PROXY_RUNTIME_IMAGE` | Image to use as the Tailscale Proxy Runtime container | `clrxbl/tailscale-svc-lb-runtime:latest` |
| `TS_PROXY_RUNTIME_IMAGE_PULL_POLICY` | ImagePullPolicy to use for the Tailscale Proxy Runtime container | `IfNotPresent` |
| `TS_PROXY_RUNTIME_REQUEST_CPU` | CPU Request for the Tailscale Proxy Runtime container | None |
| `TS_PROXY_RUNTIME_REQUEST_MEM` | Memory Request for the Tailscale Proxy Runtime container | None |
| `TS_PROXY_RUNTIME_LIMIT_CPU` | CPU Limit for the Tailscale Proxy Runtime container | None |
| `TS_PROXY_RUNTIME_LIMIT_MEM` | Memory Limit for the Tailscale Proxy Runtime container | None |
| `LEADER_ELECTOR_IMAGE`               | Image to use as the Leader Elector container                                                                                         | `gcr.io/google_containers/leader-elector:0.5`  |
| `LEADER_ELECTOR_IMAGE_PULL_POLICY` | ImagePullPolicy to use for the Leader Elector container | `IfNotPresent` |
| `LEADER_ELECTOR_REQUEST_CPU` | CPU Request for the Leader Elector container | None |
| `LEADER_ELECTOR_REQUEST_MEM` | Memory Request for the Leader Elector container | None |
| `LEADER_ELECTOR_LIMIT_CPU` | CPU Limit for the Leader Elector container | None |
| `LEADER_ELECTOR_LIMIT_MEM` | Memory Limit for the Leader Elector container | None |
| `TS_HOSTNAME_FROM_SERVICE` | If set to `true`, the Hostname of the Tailscale Proxy will be generated from the namespace and service name of the proxied service | `false` |
| `TS_HOSTNAME_FROM_SERVICE_SUFFIX` | An optional hostname suffix to add to automatically generated Hostnames. Only applies if `TS_HOSTNAME_FROM_SERVICE` is `true` | None |
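
As an illustrative sketch only (the exact field layout depends on how the controller is deployed, e.g. via the Helm chart), a few of these variables could be set on the controller container like this:

```yaml
# Fragment of the controller's container spec (illustrative values)
env:
  - name: DEPLOYMENT_TYPE
    value: "DaemonSet"
  - name: TS_PROXY_NAMESPACE
    value: "tailscale-proxies"       # placeholder namespace
  - name: TS_HOSTNAME_FROM_SERVICE
    value: "true"
  - name: IMAGE_PULL_SECRETS
    value: "regcred-a;regcred-b"     # semicolon-separated secret names (placeholders)
```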

## How it works

**On new LoadBalancer service:**
1. Look for LoadBalancer services with our loadbalancerclass
2. Look for nodes with the label `svc-lb.tailscale.iptables.sh/deploy=true`
3. Deploy a DaemonSet with the name: `ts-${SVC_NAME}` and our custom Docker image containing tailscaled.
4. Let the DaemonSet container run tailscaled, once IP is acquired, update tailscaled's secret with the Tailscale IP.
1. Look for LoadBalancer services with our loadBalancerClass (Default: `svc-lb.tailscale.iptables.sh/lb`)
2. Look for nodes with our nodeSelectorLabel (Default: `svc-lb.tailscale.iptables.sh/deploy`) with the value `true`
3. Deploy a Deployment or DaemonSet with the name: `${RESOURCE_PREFIX}${SVC_NAME}` and our custom Docker image containing tailscaled.
4. Let the Deployment or DaemonSet run tailscaled, once IP is acquired, update tailscaled's secret with the Tailscale IP.
5. Retrieve IP from secret/configmap, update LoadBalancer service with ingress IP (Tailscale IP)
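
After step 5, the proxied Service's status carries the Tailscale address as its ingress IP, roughly like this (the address is an example from Tailscale's 100.64.0.0/10 range):

```yaml
status:
  loadBalancer:
    ingress:
      - ip: 100.101.102.103   # example Tailscale IP
```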


Each `tailscale-svc-lb-runtime` DaemonSet runs the `leader-elector` sidecar to automatically elect a leader using the Kubernetes leader election system. `tailscaled` only works properly when ran on 1 pod at a time, hence this leader election system.
Each `tailscale-svc-lb-runtime` DaemonSet/Deployment runs the `leader-elector` sidecar to automatically elect a leader using the Kubernetes leader election system. `tailscaled` only works properly when run on one pod at a time, hence the leader election system.

iptables DNAT is used to redirect incoming traffic to the service's ClusterIP address, so the `NET_ADMIN` capability and IPv4 forwarding are required.
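
As a hedged sketch (not the exact manifest the controller generates), the proxy pod needs settings along these lines:

```yaml
# Illustrative pod-spec fragment for the Tailscale proxy
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.ip_forward   # IPv4 forwarding; an "unsafe" sysctl the kubelet must allow,
        value: "1"                  #   unless forwarding is enabled some other way
  containers:
    - name: tailscale-svc-lb-runtime
      image: clrxbl/tailscale-svc-lb-runtime:latest
      securityContext:
        capabilities:
          add:
            - NET_ADMIN             # required for the iptables DNAT rules
```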

**On LoadBalancer service deletion:**
1. Delete the DaemonSet
2. Delete the Secret/ConfigMap
3. Let Kubernetes delete the service

**Every 15 seconds, after an initial 30-second idle time:**
1. Iterate over all LoadBalancer services with our loadBalancerClass (Default: `svc-lb.tailscale.iptables.sh/lb`)
2. Reconcile the state of the relevant `${RESOURCE_PREFIX}${SVC_NAME}` resources
3. If any resources are missing, create the Deployment/DaemonSet/Role/RoleBindings/ServiceAccount as necessary
6 changes: 0 additions & 6 deletions chart/Chart.yaml

This file was deleted.

67 changes: 0 additions & 67 deletions chart/templates/deployment.yaml

This file was deleted.

68 changes: 0 additions & 68 deletions chart/values.yaml

This file was deleted.

File renamed without changes.
13 changes: 13 additions & 0 deletions charts/tailscale-svc-lb/Chart.yaml
@@ -0,0 +1,13 @@
apiVersion: v2
name: tailscale-svc-lb
description: klipper-lb but Tailscale
type: application
version: 1.0.0
appVersion: "1.0.0"
maintainers:
  - name: Ben
    email: [email protected]
    url: https://github.com/bbetter173
  - name: clrxbl
    email: [email protected]
    url: https://github.com/clrxbl
File renamed without changes.