
Chore(1.13.2): Add litmus docs version 1.13.2 (#452)
* Chore(1.13.2): Add litmus docs version 1.13.2

Signed-off-by: udit <[email protected]>
uditgaurav authored Mar 15, 2021
1 parent 452d7cd commit 8979850
Showing 34 changed files with 262 additions and 97 deletions.
7 changes: 7 additions & 0 deletions docs/ec2-terminate.md
@@ -114,6 +114,9 @@ rules:
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["patch","get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -214,6 +217,10 @@ spec:
# provide the region name of the instance
- name: REGION
value: ''
# set to 'enable' if the target instance is part of a self-managed nodegroup
- name: MANAGED_NODEGROUP
value: 'disable'
```

### Create the ChaosEngine Resource
3 changes: 3 additions & 0 deletions docs/pod-autoscaler.md
@@ -159,6 +159,9 @@ subjects:

</table>

**NOTE:** While running this experiment, provide the label of the resource object (Deployment/StatefulSet) in the `appinfo` section of the ChaosEngine, and _NOT_ the pod label. You can check the resource object's label using `kubectl get <resource-type> --show-labels` (where `resource-type` can be `deploy` or `sts`), as shown in the sketch below.
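
For instance, assuming an `nginx` Deployment labeled `app=nginx` (illustrative values), the label can be verified and then supplied to the ChaosEngine roughly as follows:

```bash
# List the Deployments along with their labels and note the label of the target
kubectl get deploy -n default --show-labels
```

```yaml
# Illustrative appinfo section of the ChaosEngine (namespace/label/kind are placeholders)
appinfo:
  appns: 'default'
  applabel: 'app=nginx'    # label of the Deployment/StatefulSet, NOT the pod label
  appkind: 'deployment'
```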

#### Sample ChaosEngine Manifest

[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/pod-autoscaler/engine.yaml yaml)
35 changes: 28 additions & 7 deletions website/versioned_docs/version-1.13.0/admin-mode.md
@@ -24,7 +24,7 @@ Provide this ServiceAccount in ChaosEngine's .spec.chaosServiceAccount.
- Select the Chaos Experiment from [hub.litmuschaos.io](https://hub.litmuschaos.io/) and click on the `INSTALL EXPERIMENT` button.

```bash
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/generic/pod-delete/experiment.yaml -n litmus
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/generic/pod-delete/experiment.yaml -n litmus
```
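
Once the RBAC below is applied, the admin ServiceAccount is referenced from the ChaosEngine's `.spec.chaosServiceAccount`. A minimal sketch (application details are placeholders):

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: litmus                   # admin mode: the engine lives in the operator namespace
spec:
  appinfo:
    appns: 'default'                  # placeholder target namespace
    applabel: 'app=nginx'             # placeholder application label
    appkind: 'deployment'
  chaosServiceAccount: litmus-admin   # the admin ServiceAccount defined below
  experiments:
    - name: pod-delete
```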

#### Prepare RBAC Manifest
@@ -49,15 +49,36 @@ metadata:
labels:
name: litmus-admin
rules:
- apiGroups: ["","apps","batch","extensions","litmuschaos.io"]
resources: ["pods","pods/exec","pods/eviction","jobs","daemonsets","events","chaosresults","chaosengines"]
- apiGroups: [""]
resources: ["pods","events","configmaps","secrets","services"]
verbs: ["create","delete","get","list","patch","update", "deletecollection"]
- apiGroups: ["","apps","litmuschaos.io","apps.openshift.io","argoproj.io"]
resources: ["configmaps","secrets","services","chaosexperiments","pods/log","replicasets","deployments","statefulsets","deploymentconfigs","rollouts","services"]
verbs: ["get","list","patch","update"]
- apiGroups: [""]
resources: ["pods/exec","pods/log","pods/eviction","replicationcontrollers"]
verbs: ["get","list","create"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["apps"]
resources: ["deployments","statefulsets"]
verbs: ["list","get","patch","update"]
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["list","get"]
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["list","get","delete"]
- apiGroups: ["apps.openshift.io"]
resources: ["deploymentconfigs"]
verbs: ["list","get"]
- apiGroups: ["argoproj.io"]
resources: ["rollouts"]
verbs: ["list","get"]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list","patch","update"]
verbs: ["patch","get","list","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
7 changes: 5 additions & 2 deletions website/versioned_docs/version-1.13.0/cassandra-pod-delete.md
@@ -24,7 +24,7 @@ original_id: cassandra-pod-delete
## Prerequisites

- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `cassandra-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/cassandra/cassandra-pod-delete/experiment.yaml)
- Ensure that the `cassandra-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/cassandra/cassandra-pod-delete/experiment.yaml)

## Entry Criteria

@@ -81,8 +81,11 @@ metadata:
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
resources: ["pods","pods/exec","pods/log","events","services"]
resources: ["pods","events","services"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log"]
verbs: ["create","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
6 changes: 3 additions & 3 deletions website/versioned_docs/version-1.13.0/chaos-workflows.md
@@ -67,7 +67,7 @@ namespaces. Ensure that you have the right permission to be able to create the s
- Apply the LitmusChaos Operator manifest:

```
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.13.0.yaml
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.13.2.yaml
```

- Install the litmus-admin service account to be used by the chaos-operator while executing the experiment (this example
@@ -82,12 +82,12 @@ namespaces. Ensure that you have the right permission to be able to create the s
- Install the pod-delete chaos experiment

```
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/generic/pod-delete/experiment.yaml -n litmus
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/generic/pod-delete/experiment.yaml -n litmus
```

- **Note**: If you are interested in using chaostoolkit to perform the pod-delete, instead of the native litmus lib, you can apply
these [rbac](https://github.com/litmuschaos/chaos-charts/tree/master/charts/generic/k8-pod-delete/Cluster/rbac.yaml)
& [experiment](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/generic/k8-pod-delete/experiment.yaml) manifests instead
& [experiment](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/generic/k8-pod-delete/experiment.yaml) manifests instead
of the ones described above.
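
A sketch of applying that pair (the raw rbac path below is an assumption derived from the repository link above):

```
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/chaos-charts/master/charts/generic/k8-pod-delete/Cluster/rbac.yaml -n litmus
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/generic/k8-pod-delete/experiment.yaml -n litmus
```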

- Create the service account and associated RBAC, which will be used by the Argo workflow controller to execute the
9 changes: 6 additions & 3 deletions website/versioned_docs/version-1.13.0/container-kill.md
@@ -24,7 +24,7 @@ original_id: container-kill
## Prerequisites

- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `container-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/generic/container-kill/experiment.yaml)
- Ensure that the `container-kill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/generic/container-kill/experiment.yaml)

## Entry Criteria

@@ -89,8 +89,11 @@ metadata:
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
resources: ["pods","pods/exec","pods/log","events","replicationcontrollers"]
resources: ["pods","events"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log","replicationcontrollers"]
verbs: ["list","get","create"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
@@ -179,7 +182,7 @@ subjects:
<td> LIB_IMAGE </td>
<td> LIB Image used to kill the container </td>
<td> Optional </td>
<td> Defaults to `litmuschaos/go-runner:1.13.0`</td>
<td> Defaults to `litmuschaos/go-runner:1.13.2`</td>
</tr>
<tr>
<td> LIB </td>
@@ -23,7 +23,7 @@ original_id: coredns-pod-delete

## Prerequisites
- Ensure that the Litmus ChaosOperator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `coredns-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/coredns/coredns-pod-delete/experiment.yaml)
- Ensure that the `coredns-pod-delete` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/coredns/coredns-pod-delete/experiment.yaml)

## Entry Criteria

15 changes: 12 additions & 3 deletions website/versioned_docs/version-1.13.0/disk-fill.md
@@ -25,7 +25,7 @@ original_id: disk-fill

- Ensure that Kubernetes Version > 1.13
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `disk-fill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/generic/disk-fill/experiment.yaml)
- Ensure that the `disk-fill` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/generic/disk-fill/experiment.yaml)
- Cluster must run docker container runtime
- Appropriate Ephemeral Storage Requests and Limits should be set for the application before running the experiment.
An example specification is shown below:
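
A representative spec with ephemeral-storage requests and limits (names and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                     # illustrative pod name
spec:
  containers:
  - name: app
    image: nginx                     # illustrative image
    resources:
      requests:
        ephemeral-storage: "2Gi"     # minimum ephemeral storage requested by the app
      limits:
        ephemeral-storage: "4Gi"     # upper bound the disk-fill experiment works against
```
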
@@ -110,8 +110,11 @@ metadata:
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
resources: ["pods","pods/exec","pods/log","events","replicationcontrollers"]
resources: ["pods","events"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log","replicationcontrollers"]
verbs: ["list","get","create"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
@@ -209,7 +212,7 @@ subjects:
<td> LIB_IMAGE </td>
<td> The image used to fill the disk </td>
<td> Optional </td>
<td> Defaults to `litmuschaos/go-runner:1.13.0` </td>
<td> Defaults to `litmuschaos/go-runner:1.13.2` </td>
</tr>
<tr>
<td> RAMP_TIME </td>
@@ -223,6 +226,12 @@ subjects:
<td> Optional </td>
<td> Default value: parallel. Supported: serial, parallel </td>
</tr>
<tr>
<td> EPHEMERAL_STORAGE_MEBIBYTES </td>
<td> Amount of ephemeral storage to fill, in mebibytes (MiB) </td>
<td> Optional </td>
<td></td>
</tr>
<tr>
<td> INSTANCE_ID </td>
<td> A user-defined string that holds metadata/info about current run/instance of chaos. Ex: 04-05-2020-9-00. This string is appended as suffix in the chaosresult CR name.</td>
17 changes: 13 additions & 4 deletions website/versioned_docs/version-1.13.0/ebs-loss.md
@@ -25,7 +25,7 @@ original_id: ebs-loss

- Ensure that Kubernetes Version > 1.13
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `ebs-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/kube-aws/ebs-loss/experiment.yaml)
- Ensure that the `ebs-loss` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/kube-aws/ebs-loss/experiment.yaml)
- Ensure that you have sufficient AWS access to attach or detach an ebs volume from the instance.
- Ensure that a Kubernetes secret containing the AWS access configuration (key) has been created in the `CHAOS_NAMESPACE`. A sample secret file looks like the sketch below:
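
(A representative layout; the secret name and key follow the usual Litmus kube-aws convention and should be matched to whatever the experiment/ChaosEngine references.)

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-secret               # assumed name; align with the experiment's expectation
type: Opaque
stringData:
  cloud_config.yml: |-
    # standard AWS credentials file format
    [default]
    aws_access_key_id = XXXXXXXXXXXXXXXXXXX
    aws_secret_access_key = XXXXXXXXXXXXXXX
```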

@@ -99,9 +99,18 @@ metadata:
name: ebs-loss-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","litmuschaos.io","batch"]
resources: ["pods","jobs","secrets","events","pods/log","pods/exec","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: [""]
resources: ["pods","events","secrets"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log"]
verbs: ["create","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
40 changes: 34 additions & 6 deletions website/versioned_docs/version-1.13.0/ec2-terminate.md
@@ -21,11 +21,16 @@ original_id: ec2-terminate
</tr>
</table>

### WARNING
```
If the target EC2 instance is part of a self-managed nodegroup:
Cordon the target node before running the experiment so that the experiment pods are not scheduled on it, and drain it first if any application is running on it.
```
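
For example (the node name is a placeholder):

```bash
# Keep new pods, including the experiment pods, off the target node
kubectl cordon <target-node-name>

# Evict running workloads before triggering chaos
# (older kubectl versions use --delete-local-data instead of --delete-emptydir-data)
kubectl drain <target-node-name> --ignore-daemonsets --delete-emptydir-data
```
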
## Prerequisites

- Ensure that Kubernetes Version > 1.13
- Ensure that the Litmus Chaos Operator is running by executing `kubectl get pods` in operator namespace (typically, `litmus`). If not, install from [here](https://docs.litmuschaos.io/docs/getstarted/#install-litmus)
- Ensure that the `ec2-terminate` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/kube-aws/ec2-terminate/experiment.yaml)
- Ensure that the `ec2-terminate` experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/kube-aws/ec2-terminate/experiment.yaml)
- Ensure that you have sufficient AWS access to stop and start an ec2 instance.
- Ensure that a Kubernetes secret containing the AWS access configuration (key) has been created in the `CHAOS_NAMESPACE`. A sample secret file looks like:

@@ -59,6 +64,7 @@ ENV value on `experiment.yaml` with the same name.

- Causes termination of an EC2 instance before bringing it back to running state after the specified chaos duration.
- It helps to check the performance of the application/process running on the ec2 instance.
- When `MANAGED_NODEGROUP` is set to `enable`, the experiment will not try to start the instance post-chaos; instead, it will check for the addition of a new node/instance to the cluster (see the sketch below).
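
A quick, illustrative way to observe the replacement node registering with the cluster while the experiment runs:

```bash
# Watch node status; a new node should appear and become Ready once the
# nodegroup's scaling mechanism replaces the terminated instance
kubectl get nodes -w
```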

## Integrations

@@ -77,7 +83,7 @@ ENV value on `experiment.yaml` with the same name.

#### Sample Rbac Manifest

[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/v1.13.x/charts/kube-aws/ec2-terminate/rbac.yaml yaml)
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/1.13.2/charts/kube-aws/ec2-terminate/rbac.yaml yaml)
```yaml
---
apiVersion: v1
@@ -97,9 +103,21 @@ metadata:
name: ec2-terminate-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: ["","litmuschaos.io","batch"]
resources: ["pods","jobs","secrets","events","pods/log","pods/exec","chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update","delete"]
- apiGroups: [""]
resources: ["pods","events","secrets"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log"]
verbs: ["create","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["patch","get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -146,6 +164,12 @@ subjects:
<td> Optional </td>
<td> Defaults to 60s </td>
</tr>
<tr>
<td> MANAGED_NODEGROUP </td>
<td> Set to <code>enable</code> if the target instance is part of a self-managed nodegroup </td>
<td> Optional </td>
<td> Defaults to <code>disable</code> </td>
</tr>
<tr>
<td> REGION </td>
<td> The region name of the target instance </td>
@@ -163,7 +187,7 @@ subjects:

#### Sample ChaosEngine Manifest

[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/v1.13.x/charts/kube-aws/ec2-terminate/engine.yaml yaml)
[embedmd]:# (https://raw.githubusercontent.com/litmuschaos/chaos-charts/1.13.2/charts/kube-aws/ec2-terminate/engine.yaml yaml)
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
@@ -194,6 +218,10 @@ spec:
# provide the region name of the instance
- name: REGION
value: ''
# set to 'enable' if the target instance is part of a self-managed nodegroup
- name: MANAGED_NODEGROUP
value: 'disable'
```

### Create the ChaosEngine Resource
11 changes: 7 additions & 4 deletions website/versioned_docs/version-1.13.0/getstarted.md
@@ -37,7 +37,7 @@ Running chaos on your application involves the following steps:
Apply the LitmusChaos Operator manifest:

```
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.13.0.yaml
kubectl apply -f https://litmuschaos.github.io/litmus/litmus-operator-v1.13.2.yaml
```

The above command installs all the CRDs, required service account configuration, and chaos-operator.
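
To confirm the installation went through (the operator namespace is assumed to be `litmus`):

```
kubectl get pods -n litmus
kubectl get crds | grep chaos
```
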
@@ -141,7 +141,7 @@ The generic chaos experiments such as `pod-delete`, `container-kill`, `pod-netw
This is the first chart you are recommended to install.

```
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/generic/experiments.yaml -n nginx
kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/generic/experiments.yaml -n nginx
```

Verify if the chaos experiments are installed.
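
For example (the experiments were installed into the `nginx` namespace above):

```
kubectl get chaosexperiments -n nginx
```
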
@@ -181,8 +181,11 @@ metadata:
name: pod-delete-sa
rules:
- apiGroups: [""]
resources: ["pods","pods/exec","pods/log","events","replicationcontrollers"]
resources: ["pods","events"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log","replicationcontrollers"]
verbs: ["create","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
@@ -329,7 +332,7 @@ kubectl delete chaosengine --all -n <namespace>
```

```console
kubectl delete -f https://litmuschaos.github.io/litmus/litmus-operator-v1.13.0.yaml
kubectl delete -f https://litmuschaos.github.io/litmus/litmus-operator-v1.13.2.yaml
```

**NOTE**
@@ -34,7 +34,7 @@ original_id: kafka-broker-disk-failure

Zookeeper uses this to construct a path in which kafka cluster data is stored.

- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.0?file=charts/kafka/kafka-broker-disk-failure/experiment.yaml)
- Ensure that the kafka-broker-disk failure experiment resource is available in the cluster by executing `kubectl get chaosexperiments` in the desired namespace. If not, install from [here](https://hub.litmuschaos.io/api/chaos/1.13.2?file=charts/kafka/kafka-broker-disk-failure/experiment.yaml)

- Create a secret with the gcloud serviceaccount key (placed in a file `cloud_config.yml`) named `kafka-broker-disk-failure` in the namespace where the experiment CRs are created. This is necessary to perform the disk-detach steps from the litmus experiment container.
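
A sketch of creating that secret from the key file (the namespace is a placeholder):

```bash
# The gcloud service-account key is expected under the key/file name cloud_config.yml
kubectl create secret generic kafka-broker-disk-failure \
  --from-file=cloud_config.yml -n <namespace-of-experiment-CRs>
```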
