This repository has been archived by the owner on Jul 1, 2023. It is now read-only.

Commit

Update workshop with latest feedback (#41)
* Updated Ubuntu to 18.04 everywhere

* Updated nginx version everywhere

* Changed `run` to use `--rm` where needed

* Cleaning and fixing typos

* Updated python and nginx broken version to avoid confusion

* Updated mattermost to latest version

* Added info about using $registry

* changed use of registry IP variable

* Fixed typos

* Changed docker build command to avoid cd-ing around
Lele authored Nov 8, 2019
1 parent 90fcaa7 commit e97d648
Showing 17 changed files with 63 additions and 66 deletions.
2 changes: 1 addition & 1 deletion crd/assets/nginx.yaml
Original file line number Diff line number Diff line change
@@ -12,4 +12,4 @@ kind: Nginx
metadata:
name: nginx-web
spec:
version: 1.17.5
2 changes: 1 addition & 1 deletion crd/crd.md
Original file line number Diff line number Diff line change
@@ -81,7 +81,7 @@ kind: Nginx
metadata:
name: mynginx
spec:
version: 1.17.5
```
## Creating Custom Resource
61 changes: 28 additions & 33 deletions docker.md
Original file line number Diff line number Diff line change
@@ -88,12 +88,6 @@ Stopped containers will remain available until cleaned. You can then remove stopped containers by using:
```bash
docker rm my_container_name_or_id
```

The argument used for the `rm` command can be the container ID or the container name.

If you prefer, it's possible to add the option `--rm` to the `run` subcommand so that the container will be cleaned automatically as soon as it stops its execution.
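As a quick session sketch of the difference (IDs and names will vary on your machine):

```bash
$ docker run --rm --name bye busybox echo bye
bye
$ docker ps -a --filter name=bye
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```

No stopped container is left behind, so there is nothing to `docker rm`.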
@@ -103,7 +97,7 @@
Let's see what environment variables are used by default:

```
$ docker run --rm busybox env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=0a0169cdec9a
HOME=/root
```

@@ -114,7 +108,7 @@ The environment variables passed to the container may be different on other systems.
When needed we can extend the environment by passing variable flags as `docker run` arguments:

```bash
$ docker run --rm -e HELLO=world busybox env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=8ee8ba3443b6
HELLO=world
HOME=/root
```

Let's now take a look at the process tree running in the container:

```bash
$ docker run --rm busybox ps uax
```

My terminal prints out something similar to:
@@ -136,7 +130,7 @@
```
PID USER TIME COMMAND
1 root 0:00 ps uax
```

*Oh my!* Am I running this command as root? Technically yes, although remember that, as anticipated, this is not the actual root of your host system but a very limited one running inside the container. We will get back to the topic of users and security a bit later.

In fact, as you can see, the process runs in a very limited and isolated environment where it cannot see or access all the other processes running on your machine.

@@ -145,14 +139,14 @@
The filesystem used inside running containers is also isolated and separated from the one in the host:

```bash
$ docker run --rm busybox ls -l /home
total 0
```

What if we want to expose one or more directories inside a container? To do so, the `-v/--volume` option must be used, as shown in the following example:

```
$ docker run --rm -v $(pwd):/home busybox ls -l /home
total 72
-rw-rw-r-- 1 1000 1000 11315 Nov 23 19:42 LICENSE
-rw-rw-r-- 1 1000 1000 30605 Mar 22 23:19 README.md
Expand All @@ -174,7 +168,7 @@ In this configuration all changes done in the specified directory will be immedi
Networking in Docker containers is also isolated. Let's look at the interfaces inside a running container:

```bash
$ docker run --rm busybox ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
```

@@ -207,7 +201,7 @@ We'll now translate that command into a Docker container, so that you won't need Python installed on your host.
To forward port 5000 of the host system to port 5000 inside the container, add the `-p` flag to the `run` command:

```bash
$ docker run --rm -p 5000:5000 library/python:3 python -m http.server 5000
```

This command remains alive and attached to the current session because the server will keep listening for requests.
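While it's running, you can check the forwarded port from another terminal. On my machine the exchange looks similar to:

```bash
$ curl -I http://localhost:5000/
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/3.7.4
...
```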
@@ -263,7 +257,7 @@ You can find a lot of additional low level detail [here](http://crosbymichael.co
Our last Python server example was inconvenient: it ran in the foreground, bound to our shell, so closing the shell would also kill the container. To fix this, let's change our command to:

```bash
$ docker run --rm -d -p 5000:5000 --name=simple1 library/python:3 python -m http.server 5000
```

The `-d` flag instructs Docker to start the process in the background. Let's see if our HTTP connection still works after we close our session:
@@ -280,7 +274,7 @@ It's still working and now we can see it running with the `ps` command:
```bash
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eea49c9314db library/python:3 "python -m http.serve" 3 seconds ago Up 2 seconds 0.0.0.0:5000->5000/tcp simple1
```
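Since the container is now detached from the terminal, `docker logs` and `docker stop` are the usual ways to interact with it by name; a sketch:

```bash
$ docker logs simple1    # dump the server's output so far
$ docker stop simple1    # send SIGTERM (then SIGKILL after a timeout)
```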

### Inspecting a running container
@@ -341,13 +335,13 @@
```
root 13 0.0 0.0 19188 2284 ? R+ 18:08 0:00 ps uax
```
To best illustrate the impact of `-i` or `--interactive` in the expanded version, consider this example:
```bash
$ echo "hello there" | docker run --rm busybox grep hello
```
The example above won't work, as the container's stdin is not attached to the host's stdin. The `-i` flag fixes just that:
```bash
$ echo "hello there" | docker run -i busybox grep hello
$ echo "hello there" | docker run --rm -i busybox grep hello
hello there
```
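A closely related flag is `-t/--tty`, which allocates a pseudo-terminal; `-i` and `-t` together are the standard way to get an interactive shell inside a container. A session sketch:

```bash
$ docker run --rm -it busybox sh
/ # ps uax
/ # exit
```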
@@ -418,10 +412,10 @@ Here's a quick explanation of the columns shown in that output:
### Running the image
Trying to run our newly built image will result in an error similar to one of the following, depending on the Docker version:
```bash
$ docker run --rm hello /hello.sh
write pipe: bad file descriptor
```
@@ -457,7 +451,7 @@
```
hello latest c8c3f1ea6ede
```
We can run our script now:
```bash
$ docker run --rm hello /hello.sh
hello, world!
```
Expand Down Expand Up @@ -486,7 +480,7 @@ hello latest 47060b048841
Execute the script using `image:tag` notation:
```bash
$ docker run --rm hello:v2 /hello.sh
hello, world v2!
```
@@ -503,17 +497,17 @@
```Dockerfile
ENTRYPOINT ["/hello.sh"]
```

```bash
$ docker build -t hello:v3 .
```
We should now be able to run the new image version without supplying additional arguments:
```bash
$ docker run --rm hello:v3
hello, world !
```
What happens if you pass additional arguments, as in the previous examples? They will be passed to the `ENTRYPOINT` command as arguments:
```bash
$ docker run --rm hello:v3 woo
hello, world woo!
```
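A related pattern, shown here only as an illustrative sketch (it is not part of the workshop's `hello` image), is to combine `ENTRYPOINT` with `CMD`: `CMD` then provides default arguments that any `docker run` arguments replace.

```Dockerfile
FROM busybox
COPY hello.sh /hello.sh
ENTRYPOINT ["/hello.sh"]
# Default argument; `docker run <image> woo` would replace it with "woo"
CMD ["world"]
```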
@@ -553,13 +547,13 @@ Let's build and run:
```bash
$ cd docker/busybox-env
$ docker build -t hello:v4 .
$ docker run --rm -e RUN1=Alice hello:v4
hello, Bob and Alice!
```
Though it's important to know that **variables specified at runtime take precedence over those specified at build time**:
```bash
$ docker run --rm -e BUILD1=Jon -e RUN1=Alice hello:v4
hello, Jon and Alice!
```
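For reference, the behavior above is driven by `ENV` instructions in the image; a minimal sketch of what such a Dockerfile could look like (the actual `docker/busybox-env` file may differ):

```Dockerfile
FROM busybox
# Build-time default, overridable at runtime with `docker run -e BUILD1=...`
ENV BUILD1=Bob
COPY hello.sh /hello.sh
ENTRYPOINT ["/hello.sh"]
```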
@@ -621,7 +615,7 @@
```bash
Step 4 : ENTRYPOINT /script.sh
Removing intermediate container 50f057fd89cb
Successfully built db7c6f36cba1
$ docker run --rm hello:v6
hello, hello!
```
@@ -636,7 +630,7 @@ They are only different by one letter, but this makes a difference:
```bash
$ docker build -t hello:v7 .
$ docker run --rm hello:v7
Hello, hello!
```
@@ -677,7 +671,7 @@ The most frequently used command is `RUN`, as it executes the command in a container.
Let's use existing package managers to compose our images:
```Dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y curl
ENTRYPOINT curl
```

@@ -693,6 +687,7 @@
```bash
$ docker build -t myubuntu .
```
We can use our newly created ubuntu image to curl pages:
```bash
$ # don't use `--rm` this time
$ docker run myubuntu https://google.com
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
```

@@ -710,10 +705,10 @@ However, it all comes at a price:
```bash
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myubuntu latest 50928f386c70 53 seconds ago 106 MB
```
That is 106MB for curl! As we know, there is no mandatory requirement to ship a full OS inside an image.

If, based on your use case, you still need one, Docker will save you some space by re-using the base layer, so images with slightly different bases will not duplicate each other.
### Operations with images
@@ -794,7 +789,7 @@ Images are distributed via a special service: the `docker registry`.
Let us spin up a local registry:
```bash
$ docker run --rm -p 5000:5000 --name registry -d registry:2
```
`docker push` is used to publish images to registries.
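To push to the local registry we just started, the image must first be tagged with the registry's address; a sketch using one of our earlier images:

```bash
$ docker tag hello:v7 127.0.0.1:5000/hello:v7
$ docker push 127.0.0.1:5000/hello:v7
```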
2 changes: 1 addition & 1 deletion docker/ubuntu/Dockerfile
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y curl
ENTRYPOINT ["curl"]
12 changes: 6 additions & 6 deletions gravity101.md
Original file line number Diff line number Diff line change
@@ -10,23 +10,23 @@ For this training we’ll need:

* 1 machine for building installers. Can be any Linux (preferably Ubuntu 18.04 or recent CentOS) with installed Docker 18.06 or newer.

* 3 machines for deploying a cluster. Clean nodes, preferably Ubuntu 18.04 or recent CentOS.

_Note: If you’re taking this training as a part of Gravitational training program, you will be provided with a pre-built environment._

## Building Cluster Image

### What is Gravity?

Gravity is a set of tools that let you achieve the following results:

* Package complex Kubernetes application(-s) as self-contained, deployable “images”.

* Use those images to provision multi-node hardened HA Kubernetes clusters from scratch on any fleet of servers in the cloud or on-prem, including fully air-gapped environments, with a single command (or a click).

* Perform cluster health monitoring and lifecycle management (such as scaling up/down), provide controlled, secure and audited access to the cluster nodes, automatically push application updates to many clusters and much more.

You can think of Gravity as an “image” management toolkit and draw an analogy with Docker: with Docker you build a “filesystem image” and use that image to spin up many containers, whereas Gravity allows you to build a “cluster image” and use it to spin up many Kubernetes clusters.

Let’s take a look at how we build a cluster image.

@@ -40,11 +40,11 @@ Tele can be downloaded from the Downloads page.

_Note: If you were provided with a pre-built environment for the training, `tele` should already be present on the build machine._

_Note: Currently `tele` can build cluster images on Linux only, due to some quirks with Docker on macOS._

### Cluster Manifest

As a next step, we need to create a cluster manifest. The cluster manifest is the equivalent of a “Dockerfile” for your cluster image: it describes basic cluster metadata, provides requirements for the cluster nodes, defines various cluster lifecycle hooks and so on.

Our [documentation](https://gravitational.com/gravity/docs/pack/#application-manifest) provides a full list of parameters that the manifest lets you tweak, but for now let’s create the simplest possible manifest file. The manifest file is usually named `app.yaml`.

20 changes: 11 additions & 9 deletions k8s101.md
Original file line number Diff line number Diff line change
@@ -339,7 +339,7 @@ spec:
run: my-nginx
spec:
containers:
- image: nginx:1.17.5
name: my-nginx
ports:
- containerPort: 80
@@ -388,7 +388,7 @@
```bash
Events:
1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set my-nginx-3800858182 to 0
```

And now its version is `1.17.5`. Let's confirm it in the response headers:

```bash
$ kubectl run -i -t --rm cli --image=appropriate/curl --restart=Never /bin/sh
curl -v http://my-nginx
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.17.5
```
Let's simulate a situation where a deployment fails and we need to roll back. Our deployment has a typo:
@@ -431,7 +431,7 @@ spec:
run: my-nginx
spec:
containers:
- image: nginx:999 # <-- TYPO: non-existent version
name: my-nginx
ports:
- containerPort: 80
@@ -581,7 +581,7 @@ spec:
run: my-nginx
spec:
containers:
- image: nginx:1.17.5
name: my-nginx
ports:
- containerPort: 80
@@ -651,10 +651,9 @@ Mattermost stack is composed of a worker process that connects to a running PostgreSQL instance.
Let's build a container image for our worker and push it to our local private registry:
```bash
$ export registry="$(kubectl get svc/registry -ojsonpath='{.spec.clusterIP}'):5000"
$ eval $(minikube docker-env)
$ docker build -t $registry/mattermost-worker:5.16.3 mattermost/worker
$ docker push $registry/mattermost-worker:5.16.3
```
@@ -781,7 +780,7 @@ spec:
role: mattermost-worker
spec:
containers:
- image: __REGISTRY_IP__/mattermost-worker:5.16.3
name: mattermost-worker
ports:
- containerPort: 80
@@ -795,8 +794,11 @@ spec:
name: mattermost-v1
```
The following command is just a fancy one-liner that inserts the value of `$registry` into your `kubectl` command on the fly.
```bash
$ cat mattermost/worker.yaml | sed "s/__REGISTRY_IP__/$registry/g" | kubectl create --record -f -
```
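If you want to see what the substitution does before sending anything to the cluster, you can run the `sed` part alone on a one-line sample (the registry address below is made up for illustration):

```shell
# Simulate the template substitution from the one-liner above
registry="10.99.0.7:5000"
printf '    - image: __REGISTRY_IP__/mattermost-worker:5.16.3\n' > /tmp/worker-demo.yaml
sed "s/__REGISTRY_IP__/$registry/g" /tmp/worker-demo.yaml
# prints:     - image: 10.99.0.7:5000/mattermost-worker:5.16.3
```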
Let's check out the status of the deployment to double-check that part too: