docs/solutions/ha-architecture.md (+2 −2)

@@ -1,8 +1,8 @@
-# Architecture layout
+# Architecture
 
 As we discussed in the [overview of high availability](high-availability.md), the minimalist approach to a highly available deployment is a three-node PostgreSQL cluster with cluster management and failover mechanisms, a load balancer, and a backup / restore solution.
 
-The following diagram shows this architecture.
+The following diagram shows this architecture with the tools we recommend.
docs/solutions/ha-measure.md (+6 −1)

@@ -1,6 +1,11 @@
 # Measuring high availability
 
-The need for high availability is determined by the business requirements, potential risks, and operational limitations. The level of high availability depends on how much downtime you can bear without negatively impacting your users and how much data loss you can tolerate during the system outage.
+The need for high availability is determined by the business requirements, potential risks, and operational limitations (for example, the more components you add to your infrastructure, the more complex and time-consuming it is to maintain).
+
+The level of high availability depends on the following:
+
+* how much downtime you can bear without negatively impacting your users, and
+* how much data loss you can tolerate during a system outage.
 
 Availability is measured by establishing a measurement time frame and dividing the time the system was available by that time frame. This ratio will rarely equal one, which corresponds to 100% availability. At Percona, we don’t consider a solution to be highly available if it is not at least 99%, or two nines, available.
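The ratio described above is straightforward to compute. A small sketch with hypothetical numbers, which also shows the downtime budget implied by a given number of nines:

```python
# Availability = time the system was available / measurement time frame.
def availability(uptime_hours: float, timeframe_hours: float) -> float:
    """Availability as a ratio in [0, 1]; 1.0 means 100% availability."""
    return uptime_hours / timeframe_hours

# Downtime budget implied by a number of nines (2 -> 99%, 3 -> 99.9%, ...).
def downtime_budget(nines: int, timeframe_hours: float = 8766.0) -> float:
    """Hours of allowed downtime per time frame (default: ~1 year)."""
    return timeframe_hours * 10.0 ** -nines

# Example: 87 hours of downtime in a year is just over "two nines".
print(f"{availability(8766 - 87, 8766):.4%}")   # 99.0075%
print(f"{downtime_budget(2):.2f} hours/year")   # 87.66 hours/year
```

Two nines thus allow roughly 3.65 days of downtime per year; each additional nine shrinks the budget tenfold.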
docs/solutions/high-availability.md (+10 −1)

@@ -17,7 +17,7 @@ High availability is the ability of the system to operate continuously without t
 ### How to achieve it?
 
-A short answer is: add redundancy to your deployment, eliminate a single point of failure and have the mechanism to transfer the services from a failed member to the healthy one.
+A short answer is: add redundancy to your deployment, eliminate any single point of failure (SPOF), and have a mechanism to transfer services from a failed member to a healthy one.
 
 For a long answer, let's break it down into steps.
 
@@ -27,12 +27,16 @@ First, you should have more than one copy of your data. This means, you need to
 You typically deploy these instances on separate servers or nodes. An example of such a deployment is a three-instance cluster consisting of one primary and two replica nodes. The replicas receive the data via the replication mechanism.
+
+PostgreSQL natively supports logical and streaming replication. For high availability we recommend streaming replication, as it happens in real time, minimizing the delay between the primary and replica nodes.
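To make that recommendation concrete, a minimal sketch of creating a streaming replica, assuming PostgreSQL 12 or later; the host name, replication user, and data directory are placeholders:

```shell
# On the primary: allow replication connections (postgresql.conf)
#   wal_level = replica        # the default since PostgreSQL 10
#   max_wal_senders = 10
#
# On the new replica host: clone the primary and configure standby mode.
# -R writes primary_conninfo and creates standby.signal (PostgreSQL 12+),
# so the instance starts as a streaming replica.
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/data -X stream -R -P
```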
 
 #### Step 2. Failover
 
 Next, you may have a situation when the primary node is down or not responding. Reasons can vary: from hardware or network issues to software failures, power outages, and scheduled maintenance. In this case, you must have a way to know about it and to transfer operations from the primary node to one of the secondaries. This process is called failover.
+
+You can do a manual failover. It suits environments where downtime does not impact operations or revenue. However, it requires dedicated personnel and may lead to additional downtime.
 
 Another option is automated failover, which significantly minimizes downtime and is less error-prone than a manual one. Automated failover can be accomplished by adding an open-source failover tool to your deployment.
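As one illustration of such a tool (named here only as an example, not as this page's prescription), Patroni detects a failed primary via a leader lease stored in a distributed configuration store and promotes a replica when the lease expires. A minimal sketch of the relevant settings, with hypothetical hosts and values:

```yaml
# patroni.yml (sketch) -- hosts, names, and values are placeholders
scope: postgres-cluster        # cluster name shared by all members
name: node1                    # unique per node

restapi:
  listen: 0.0.0.0:8008         # health-check endpoint, also usable by proxies

etcd3:
  hosts: 10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379

bootstrap:
  dcs:
    ttl: 30                    # leader lease; its expiry triggers failover
    loop_wait: 10              # seconds between cluster state checks
    retry_timeout: 10
```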
@@ -41,12 +45,17 @@ Another option is automated failover, which significantly minimizes downtime and
 
 Instead of a single node, you now have a cluster. How do you enable users to connect to the cluster and ensure they always connect to the correct node, especially when the primary node changes? One option is to configure DNS resolution that resolves to the IPs of all cluster nodes. A drawback here is that only the primary node accepts all requests. As your system grows, so does the load, which may overload the primary node and degrade performance.
+
+Another option is to use a load-balancing proxy. Instead of connecting directly to the IP address of the primary node, which can change during a failover, you use a proxy that acts as a single point of entry for the entire cluster. This proxy knows which node is currently the primary and directs all incoming write requests to it. At the same time, it can distribute read requests among the replicas to evenly spread the load and improve performance.
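A sketch of such a proxy, assuming HAProxy for write routing and a Patroni-style REST endpoint on port 8008 that returns HTTP 200 on `/primary` only from the current primary; the node addresses are placeholders:

```
# haproxy.cfg (sketch) -- writes always land on the current primary
listen postgres_write
    bind *:5432
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.0.0.1:5432 check port 8008
    server node2 10.0.0.2:5432 check port 8008
    server node3 10.0.0.3:5432 check port 8008
```

After a failover, the health check fails on the old primary and succeeds on the newly promoted node, so traffic shifts without clients changing their connection string.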
 
 #### Step 4. Backups
 
 Even with replication and failover mechanisms in place, it’s crucial to have regular backups of your data. Backups provide a safety net for catastrophic failures that affect both the primary and replica nodes. While replication ensures data is synchronized across multiple nodes, it does not protect against data corruption, accidental deletions, or malicious attacks that can affect all nodes.
+
+Having regular backups ensures that you can restore your data to a previous state, preserving data integrity and availability even in the worst-case scenarios. Store your backups in separate, secure locations and regularly test them to ensure that you can quickly and accurately restore them when needed. This additional layer of protection is essential to maintaining continuous operation and minimizing data loss.
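As an illustrative sketch only (this page does not mandate a specific tool), a dedicated backup tool such as pgBackRest can take full and incremental backups to a separate repository and verify that restores work; the stanza name below is a placeholder:

```shell
# One-time: create the stanza (backup configuration) and verify it
pgbackrest --stanza=demo stanza-create
pgbackrest --stanza=demo check

# Take a full backup, then later an incremental one
pgbackrest --stanza=demo --type=full backup
pgbackrest --stanza=demo --type=incr backup

# Periodically exercise the restore path on a test host
pgbackrest --stanza=demo restore --delta
```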
 
 As a result, you end up with the following components for a minimalistic highly-available deployment: