3 changes: 2 additions & 1 deletion docs/conf.py
@@ -232,7 +232,8 @@
"https://matrix.to/*",
"https://portal.azure.com/*",
"https://dev.mysql.com/*",
"https://www.mysql.com/*"
"https://www.mysql.com/*",
"https://www.terraform.io/*"
]

# A regex list of URLs where anchors are ignored by 'make linkcheck'
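For reference, the entries above extend Sphinx's `linkcheck_ignore` option, which treats each string as a regular expression matched against the start of a URL. A minimal sketch of how the patterns above behave, using Python's `re` to mirror what `make linkcheck` does (the helper `is_ignored` is illustrative, not part of Sphinx):

```python
import re

# Patterns as they appear in docs/conf.py. Sphinx's linkcheck builder
# applies re.match, so each pattern is anchored at the start of the URL.
linkcheck_ignore = [
    r"https://matrix.to/*",
    r"https://portal.azure.com/*",
    r"https://dev.mysql.com/*",
    r"https://www.mysql.com/*",
    r"https://www.terraform.io/*",
]

def is_ignored(url: str) -> bool:
    """Return True if any ignore pattern matches the URL prefix."""
    return any(re.match(pattern, url) for pattern in linkcheck_ignore)

print(is_ignored("https://www.terraform.io/docs"))  # True
print(is_ignored("https://example.com/"))           # False
```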
2 changes: 1 addition & 1 deletion docs/explanation/juju.md
@@ -62,5 +62,5 @@ ERROR Charm feature requirements cannot be met:
- charm requires feature "juju" (version >= 3.1.5) but model currently supports version 3.1.4
```

You must then [upgrade to the required Juju version](/how-to/upgrade/upgrade-juju) before proceeding with the charm upgrade.
You must then [upgrade to the required Juju version](/how-to/refresh/upgrade-juju) before proceeding with the charm upgrade.
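The feature check behind this error is a plain version comparison; a short sketch of the idea (this is illustrative, not the charm's actual code):

```python
def parse_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '3.1.5' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

required = "3.1.5"  # version the charm requires
current = "3.1.4"   # version the model currently supports

if parse_version(current) < parse_version(required):
    print(f"model must be upgraded from Juju {current} to at least {required}")
```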

2 changes: 1 addition & 1 deletion docs/explanation/security/index.md
@@ -63,7 +63,7 @@ Charmed MySQL K8s and Charmed MySQL Router K8s run on top of the same rock (OCI-

[Charmed MySQL K8s operator](https://github.com/canonical/mysql-k8s-operator) and [Charmed MySQL Router K8s operator](https://github.com/canonical/mysql-router-k8s-operator) install pinned versions of the rock to provide reproducible and secure environments. New versions (revisions) of charmed operators can be released to update the operator's code, workloads, or both. It is important to refresh the charm regularly to make sure the workload is as secure as possible.

For more information on upgrading Charmed MySQL K8s, see the [How to upgrade MySQL](/how-to/upgrade/index) and [How to upgrade MySQL Router](https://charmhub.io/mysql-router-k8s/docs/h-upgrade) guides, as well as the [Releases](/reference/releases).
For more information on upgrading Charmed MySQL K8s, see the [How to upgrade MySQL](/how-to/refresh/index) and [How to upgrade MySQL Router](https://charmhub.io/mysql-router-k8s/docs/h-upgrade) guides, as well as the [Releases](/reference/releases).

### Encryption

@@ -1,8 +1,8 @@
# Clients

## Pre-requisits
## Pre-requisites

Make sure both `Rome` and `Lisbon` Clusters are deployed using the [Async Deployment manual](/how-to/cross-regional-async-replication/deploy)!
Make sure both `rome` and `lisbon` clusters are deployed according to [](/how-to/cluster-cluster-replication/deploy).

## Offer and consume database endpoints

@@ -1,16 +1,7 @@
# Deploy async replication

The following table shows the source and target controller/model combinations that are currently supported:

| | AWS | GCP | Azure |
|---|---|:---:|:---:|
| AWS | ![ check ] | | |
| GCP | | ![ check ] | |
| Azure | | | ![ check ] |

## Deploy
# Deploy

Deploy two MySQL Clusters, named `Rome` and `Lisbon`:

```shell
juju add-model rome # 1st cluster location: Rome
juju add-model lisbon # 2nd cluster location: Lisbon
juju switch lisbon
juju deploy mysql-k8s db2 --trust --channel=8.0/edge --config profile=testing --config cluster-name=lisbon --base ubuntu@22.04
```

```{note}
Remove profile configuration for production deployments. For more information, see our documentation about [Profiles](https://charmhub.io/mysql-k8s/docs/r-profiles).
```{caution}
Remove [profile](/reference/profiles) configuration for production deployments.
```
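For example, a production-oriented variant of the deploy command above simply drops the `profile` option (a sketch; adjust the channel and base to your target release):

```shell
# Same deployment as above, without the testing profile:
juju deploy mysql-k8s db2 --trust --channel=8.0/edge --config cluster-name=lisbon --base ubuntu@22.04
```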

## Offer
30 changes: 30 additions & 0 deletions docs/how-to/cluster-cluster-replication/index.md
@@ -0,0 +1,30 @@
# Cluster-cluster replication

Cluster-cluster asynchronous replication focuses on disaster recovery by distributing data across different servers.

For increased safety, it is recommended to deploy each cluster in a different geographical region.

## Substrate dependencies

The following table shows the source and target controller/model combinations that are currently supported:

| | AWS | GCP | Azure |
|-------|------------|:----------:|:----------:|
| AWS | ![ check ] | | |
| GCP | | ![ check ] | |
| Azure | | | ![ check ] |

## Guides

```{toctree}
:titlesonly:
:maxdepth: 2

Deploy <deploy>
Clients <clients>
Switchover/failover <switchover-failover>
Recovery <recovery>
Removal <removal>
```

[check]: https://img.shields.io/badge/%E2%9C%93-brightgreen
@@ -2,19 +2,19 @@

## Pre-requisites

Make sure both `Rome` and `Lisbon` clusters are deployed following the [async deployment guide](/how-to/cross-regional-async-replication/deploy).
Make sure both `Rome` and `Lisbon` clusters are deployed following the [cluster-cluster deployment guide](/how-to/cluster-cluster-replication/deploy).

## Recover detached cluster

If the relation between clusters was removed and one side went into detached/blocked state: simply relate async replication back to restore ClusterSet.
If the relation between clusters was removed and one side went into a detached/blocked state, simply relate cluster-cluster replication back to restore the ClusterSet.

## Recover lost cluster

If a cluster has been lost and the ClusterSet need new member: deploy new db application and init async replication. The data will be copied automatically and the new cluster will join ClusterSet.
If a cluster has been lost and the ClusterSet needs a new member, deploy a new database application and initialise cluster-cluster replication. The data will be copied automatically and the new cluster will join the ClusterSet.

## Recover invalidated cluster

A cluster in the cluster-set gets invalidated when async replication auto-recovery fails on a disconnection event or when a failover is run against another cluster-set member while this cluster is unreachable.
A cluster in the cluster-set gets invalidated when cluster-cluster replication auto-recovery fails on a disconnection event or when a failover is run against another cluster-set member while this cluster is unreachable.

If the invalidated cluster's connection is restored, its status will be displayed in `juju status` as:

@@ -2,12 +2,12 @@

## Pre-requisites

Make sure both `Rome` and `Lisbon` clusters are deployed following the [async deployment guide](/how-to/cross-regional-async-replication/deploy).
Make sure both `Rome` and `Lisbon` clusters are deployed following the [cluster-cluster deployment guide](/how-to/cluster-cluster-replication/deploy).

## Detach Cluster from ClusterSet

```{important}
It is important to [switchover](/how-to/cross-regional-async-replication/switchover-failover) the `Primary` cluster before detaching it from ClusterSet!
It is important to [switchover](/how-to/cluster-cluster-replication/switchover-failover) the `Primary` cluster before detaching it from ClusterSet!
```

Assuming `Lisbon` is the current `Primary` and we want to detach `Rome` (for removal or reuse):
@@ -24,7 +24,7 @@ From this point, there are three options, as described in the following sections:

## Rejoin detached cluster into previous ClusterSet

At this stage, the detached/blocked cluster `Rome` can re-join the previous ClusterSet by restoring async integration/relation:
At this stage, the detached/blocked cluster `Rome` can re-join the previous ClusterSet by restoring cluster-cluster integration/relation:

```shell
juju switch rome
@@ -2,7 +2,7 @@

## Pre-requisites

Make sure both `Rome` and `Lisbon` Clusters are deployed using the [Async Deployment manual](/how-to/cross-regional-async-replication/deploy)!
Make sure both `Rome` and `Lisbon` Clusters are deployed using the [Async Deployment manual](/how-to/cluster-cluster-replication/deploy)!

## Switchover (safe)

12 changes: 0 additions & 12 deletions docs/how-to/cross-regional-async-replication/index.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/how-to/deploy/multi-az.md
@@ -256,7 +256,7 @@ mydatabase/2 active idle 10.80.1.6

At this point we can relax and enjoy the protection from Cloud Availability zones!

To survive a complete cloud outage, we recommend setting up [cluster-cluster asynchronous replication](/how-to/cross-regional-async-replication/deploy).
To survive a complete cloud outage, we recommend setting up [cluster-cluster asynchronous replication](/how-to/cluster-cluster-replication/deploy).


## Remove GKE setup
8 changes: 4 additions & 4 deletions docs/how-to/index.md
@@ -47,22 +47,22 @@ Integrate with observability services like Grafana, Prometheus, and Tempo.
Monitoring (COS) <monitoring-cos/index>
```

## Upgrades
## Refresh (upgrade)

```{toctree}
:titlesonly:
:maxdepth: 2

Upgrade <upgrade/index>
Refresh (upgrade) <refresh/index>
```

## Cross-regional (cluster-cluster) async replication
## Cluster-cluster replication

```{toctree}
:titlesonly:
:maxdepth: 2

Cross-regional async replication <cross-regional-async-replication/index>
Cluster-cluster replication <cluster-cluster-replication/index>
```

## Development
2 changes: 1 addition & 1 deletion docs/how-to/monitoring-cos/enable-tracing.md
@@ -9,7 +9,7 @@ This feature is in development. It is **not recommended** for production environments.
## Prerequisites

* Charmed MySQL K8s revision 146 or higher
* See [](/how-to/upgrade/index)
* See [](/how-to/refresh/index)
* `cos-lite` bundle deployed in a Kubernetes environment
* See the [COS Microk8s tutorial](https://charmhub.io/topics/canonical-observability-stack/tutorials/install-microk8s)

4 changes: 2 additions & 2 deletions docs/how-to/primary-switchover.md
@@ -15,6 +15,6 @@ In this example, the unit `mysql/1` will become the new primary. The previous primary will become a
secondary.

```{caution}
The `promote-to-primary` action can be used in cluster scope, when using async replication.
Check [Switchover / Failover](cross-regional-async-replication/switchover-failover) for more information.
The `promote-to-primary` action can be used in cluster scope, when using cluster-cluster replication.
Check [Switchover / Failover](cluster-cluster-replication/switchover-failover) for more information.
```
88 changes: 88 additions & 0 deletions docs/how-to/refresh/index.md
@@ -0,0 +1,88 @@
# Refresh (upgrade)

This charm supports in-place upgrades to higher versions via Juju's [`refresh`](https://documentation.ubuntu.com/juju/3.6/reference/juju-cli/list-of-juju-cli-commands/refresh/#details) command.
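For example, a refresh to the latest revision in a channel, or to a pinned revision, looks like this (a sketch; substitute your own application name and target channel or revision):

```shell
# Refresh to the latest revision published in a channel:
juju refresh mysql-k8s --channel 8.0/stable

# Or pin a specific revision:
juju refresh mysql-k8s --revision 255
```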

## Supported refreshes

```{eval-rst}
+------------+------------+----------+------------+
| From | To |
+------------+------------+----------+------------+
| Charm | MySQL | Charm | MySQL |
| revision | Version | revision | Version |
+============+============+==========+============+
| 254, 255 | ``8.0.41`` | | |
+------------+------------+----------+------------+
| 240, 241 | ``8.0.41`` | 254, 255 | ``8.0.41`` |
+------------+------------+----------+------------+
| 210, 211 | ``8.0.39`` | 254, 255 | ``8.0.41`` |
| | +----------+------------+
| | | 240, 241 | ``8.0.41`` |
+------------+------------+----------+------------+
| 180, 181 | ``8.0.37`` | 254, 255 | ``8.0.41`` |
| | +----------+------------+
| | | 240, 241 | ``8.0.41`` |
| | +----------+------------+
| | | 210, 211 | ``8.0.39`` |
+------------+------------+----------+------------+
| 153 | ``8.0.36`` | 254, 255 | ``8.0.41`` |
| | +----------+------------+
| | | 240, 241 | ``8.0.41`` |
| | +----------+------------+
| | | 210, 211 | ``8.0.39`` |
| | +----------+------------+
| | | 180, 181 | ``8.0.37`` |
+------------+------------+----------+------------+
| 127 | ``8.0.35`` | None | |
+------------+------------+----------+------------+
| 113 | ``8.0.34`` | 127 | ``8.0.35`` |
+------------+------------+----------+------------+
| 99 | ``8.0.34`` | 127 | ``8.0.35`` |
| | +----------+------------+
| | | 113 | ``8.0.35`` |
+------------+------------+----------+------------+
| 75 | ``8.0.32`` | 127 | ``8.0.35`` |
| | +----------+------------+
| | | 113 | ``8.0.35`` |
| | +----------+------------+
| | | 99 | ``8.0.34`` |
+------------+------------+----------+------------+

```

Due to an upstream issue with MySQL Server version `8.0.35`, Charmed MySQL versions below [Revision 127](https://github.com/canonical/mysql-k8s-operator/releases/tag/rev127) **cannot** be upgraded using Juju's `refresh`.

To upgrade from older versions to Revision 153 or higher, the data must be migrated manually. See: [](/how-to/development/migrate-data-via-backup-restore).
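The upgrade paths in the table above can be encoded as a small lookup table; a sketch (revision pairs are taken from the table, the helper name is illustrative):

```python
# Supported in-place refresh targets, keyed by current charm revision.
# Data mirrors the "Supported refreshes" table above.
SUPPORTED_TARGETS = {
    240: {254, 255},
    241: {254, 255},
    210: {254, 255, 240, 241},
    211: {254, 255, 240, 241},
    180: {254, 255, 240, 241, 210, 211},
    181: {254, 255, 240, 241, 210, 211},
    153: {254, 255, 240, 241, 210, 211, 180, 181},
    127: set(),          # no in-place refresh target
    113: {127},
    99: {127, 113},
    75: {127, 113, 99},
}

def can_refresh(current: int, target: int) -> bool:
    """Return True if the table lists target as a valid refresh from current."""
    return target in SUPPORTED_TARGETS.get(current, set())

print(can_refresh(153, 255))  # True
print(can_refresh(127, 153))  # False: requires manual data migration
```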

### Juju version upgrade

Before refreshing the charm, make sure to check the [](/reference/releases) page to see if there are any requirements for the new revision, such as a Juju version upgrade.

* [](/how-to/refresh/upgrade-juju)

## Refresh guides

To refresh a **single cluster**, see:

* [](/how-to/refresh/single-cluster/refresh-single-cluster)
* [](/how-to/refresh/single-cluster/roll-back-single-cluster)

To refresh a **multi-cluster** deployment, see:

* [](/how-to/refresh/multi-cluster/refresh-multi-cluster)
* [](/how-to/refresh/multi-cluster/roll-back-multi-cluster)

```{toctree}
:titlesonly:
:maxdepth: 2
:hidden:

Single cluster <single-cluster/index>
Multi-cluster <multi-cluster/index>
Upgrade Juju <upgrade-juju>
```

<!--Links-->

[cross]: https://img.icons8.com/?size=16&id=CKkTANal1fTY&format=png&color=D00303
[check]: https://img.icons8.com/color/20/checkmark--v1.png
8 changes: 8 additions & 0 deletions docs/how-to/refresh/multi-cluster/index.md
@@ -0,0 +1,8 @@
# Refresh a multi-cluster deployment

```{toctree}
:titlesonly:

Refresh <refresh-multi-cluster>
Roll back <roll-back-multi-cluster>
```
32 changes: 32 additions & 0 deletions docs/how-to/refresh/multi-cluster/refresh-multi-cluster.md
@@ -0,0 +1,32 @@
# How to refresh a multi-cluster deployment

A MySQL multi-cluster deployment (also known as a cluster set) can be upgraded by performing a refresh of each cluster individually.

This guide goes over the steps and important considerations before refreshing multiple MySQL clusters.

## Determine cluster order

To upgrade a multi-cluster deployment, each cluster must be refreshed one by one, starting with the standby clusters.

**The primary cluster must be the last one to get refreshed.**

This ensures that availability is not affected if there are any issues with the upgrade. Refreshing all the standbys first also minimizes the cost of the leader re-election process.

To identify the primary cluster, run:

```shell
juju run mysql-k8s/<n> get-cluster-status cluster-set=true
```
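To pick out the primary cluster from the action result, a sketch like the following can help. The output shape used here is an assumption for illustration; check the structure of your actual `get-cluster-status` result before relying on specific keys:

```python
# Example (assumed) shape of a cluster-set status result.
status = {
    "clusters": {
        "rome": {"clusterrole": "primary", "globalstatus": "ok"},
        "lisbon": {"clusterrole": "replica", "globalstatus": "ok"},
    }
}

def find_primary(status: dict) -> str:
    """Return the name of the cluster whose role is 'primary'."""
    for name, info in status["clusters"].items():
        if info.get("clusterrole") == "primary":
            return name
    raise ValueError("no primary cluster found")

print(find_primary(status))  # rome
```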

## Refresh each cluster

For each cluster, follow the instructions in [](/how-to/refresh/single-cluster/refresh-single-cluster).

**Perform a health check before proceeding to the next cluster.**

Use the [`get-cluster-status`](https://charmhub.io/mysql-k8s/actions#get-cluster-status) Juju action to check that everything is healthy after refreshing a cluster.

## Roll back

If something goes wrong, roll back the cluster. See: [](/how-to/refresh/single-cluster/roll-back-single-cluster)

7 changes: 7 additions & 0 deletions docs/how-to/refresh/multi-cluster/roll-back-multi-cluster.md
@@ -0,0 +1,7 @@
# Roll back a multi-cluster deployment

A multi-cluster rollback is the same as a single-cluster rollback, but repeated for each cluster that was fully or partially upgraded.

```{include} ../single-cluster/roll-back-single-cluster.md
:start-after: "How to roll back a single cluster"
```
8 changes: 8 additions & 0 deletions docs/how-to/refresh/single-cluster/index.md
@@ -0,0 +1,8 @@
# Refresh a single cluster

```{toctree}
:titlesonly:

Refresh <refresh-single-cluster>
Roll back <roll-back-single-cluster>
```