dr: adds shadowing docs #1381

@@ -0,0 +1,5 @@
= Disaster Recovery
:description: Learn about Shadowing with cross-region replication for disaster recovery.
:env-linux: true
:page-layout: index
:page-categories: Management, High Availability, Disaster Recovery

@@ -0,0 +1,274 @@
= Failover Runbook
:description: Step-by-step emergency guide for failing over Redpanda shadow links during disasters.
:page-aliases: deploy:redpanda/manual/resilience/shadowing-guide.adoc
:env-linux: true
:page-categories: Management, High Availability, Disaster Recovery, Emergency Response

include::shared:partial$enterprise-license.adoc[]

This guide provides step-by-step procedures for emergency failover when your primary Redpanda cluster becomes unavailable. Follow these procedures only during active disasters when immediate failover is required.

// TODO: All command output examples in this guide need verification by running actual commands in test environment

[IMPORTANT]
====
This is an emergency procedure. For planned failover testing or day-to-day shadow link management, see xref:./failover.adoc[]. Ensure you have completed the disaster readiness checklist in xref:./overview.adoc#disaster-readiness-checklist[] before an emergency occurs.
====

== Emergency failover procedure

Follow these steps during an active disaster:

1. <<assess-situation,Assess the situation>>
2. <<verify-shadow-status,Verify shadow cluster status>>
3. <<document-state,Document current state>>
4. <<initiate-failover,Initiate failover>>
5. <<monitor-progress,Monitor failover progress>>
6. <<update-applications,Update application configuration>>
7. <<verify-functionality,Verify application functionality>>
8. <<cleanup-stabilize,Clean up and stabilize>>

[[assess-situation]]
=== Assess the situation

Confirm that failover is necessary:

[,bash]
----
# Check if the primary cluster is responding
rpk cluster info --brokers prod-cluster-1.example.com:9092,prod-cluster-2.example.com:9092

# If the primary cluster is down, check shadow cluster health
rpk cluster info --brokers shadow-cluster-1.example.com:9092,shadow-cluster-2.example.com:9092
----

**Decision point**: If the primary cluster is responsive, consider whether failover is actually needed. Partial outages may not require full disaster recovery; the quick checks after the following lists can help you decide.

**Examples that require full failover:**

* Primary cluster is completely unreachable (network partition, regional outage)
* Multiple broker failures preventing writes to critical topics
* Data center failure affecting the majority of brokers
* Persistent authentication or authorization failures across the cluster

**Examples that may NOT require failover:**

* Single broker failure with sufficient replicas remaining
* Temporary network connectivity issues affecting some clients
* High latency or performance degradation while the cluster remains functional
* Non-critical topic or partition unavailability
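
To help make this call, a minimal sketch of quick health checks against the primary cluster is shown below. The broker address is the placeholder used earlier in this guide, and the exact output fields vary by rpk version:

[,bash]
----
# Overall health: reports the controller, down brokers, and leaderless partitions
rpk cluster health --brokers prod-cluster-1.example.com:9092

# Spot-check a critical topic to confirm its partitions still accept traffic
rpk topic describe <critical-topic-name> --brokers prod-cluster-1.example.com:9092
----

If these commands time out or report that a majority of brokers are down, treat the outage as one that requires full failover.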

[[verify-shadow-status]]
=== Verify shadow cluster status

Check the health of your shadow links:

[,bash]
----
# List all shadow links
rpk shadow list

# Check the configuration of your shadow link
rpk shadow describe <shadow-link-name>

# Check the status of your disaster recovery link
rpk shadow status <shadow-link-name>
----

Verify that the following conditions are met before proceeding with failover:

* The shadow link state is `ACTIVE`.
* Topics are in `ACTIVE` state (not `FAULTED`).
* Replication lag is within your RPO requirements.

**Understanding replication lag:**

Use `rpk shadow status <shadow-link-name>` to check lag, which shows the message count difference between source and shadow partitions:

* **Acceptable lag examples**: 0-1000 messages for low-throughput topics, 0-10000 messages for high-throughput topics
* **Concerning lag examples**: Growing lag over 50,000 messages, or lag that continuously increases without recovering
* **Critical lag examples**: Lag exceeding your data loss tolerance (for example, if you can only afford to lose 1 minute of data, lag should represent less than 1 minute of typical message volume; see the worked example below)
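
As a worked example of turning an RPO into a lag threshold, the throughput figure below is a placeholder you would replace with your own measurements:

[,bash]
----
# Typical production rate for the topic, measured during normal operation
MSGS_PER_SEC=5000
# Recovery point objective in seconds (how much data you can afford to lose)
RPO_SECONDS=60

# Maximum acceptable lag, in messages, for this topic
echo $(( MSGS_PER_SEC * RPO_SECONDS ))   # 300000
----

Lag that stays well below this number is within your RPO; lag at or above it means failing over now would likely exceed your data loss tolerance.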

[[document-state]]
=== Document current state

Record the current lag and status before proceeding:

[,bash]
----
# Capture current status for post-mortem analysis
rpk shadow status <shadow-link-name> > failover-status-$(date +%Y%m%d-%H%M%S).log
----

// TODO: Verify this output format by running actual rpk shadow status command
Example output showing healthy replication before failover:

----
Shadow Link: <shadow-link-name>

Overview:
NAME    <shadow-link-name>
UID     <uid>
STATE   ACTIVE

Tasks:
Name         Broker_ID  State   Reason
<task-name>  1          ACTIVE
<task-name>  2          ACTIVE

Topics:
Name: <topic-name>, State: ACTIVE

Partition  SRC_LSO  SRC_HWM  DST_HWM  Lag
0          1234     1468     1456     12
1          2345     2579     2568     11
----

IMPORTANT: Note the replication lag to estimate potential data loss during failover.

[[initiate-failover]]
=== Initiate failover

A complete cluster failover is appropriate if the source cluster is no longer reachable:

[,bash]
----
# Fail over all topics in the shadow link
rpk shadow failover <shadow-link-name> --all
----

For selective topic failover (when only specific services are affected):

[,bash]
----
# Fail over individual topics
rpk shadow failover <shadow-link-name> --topic <topic-name>
rpk shadow failover <shadow-link-name> --topic <topic-name>
----

[[monitor-progress]]
=== Monitor failover progress

Track the failover process:

[,bash]
----
# Monitor status until all topics show FAILED_OVER
watch -n 5 "rpk shadow status <shadow-link-name>"

# Check detailed topic status and lag during the emergency
rpk shadow status <shadow-link-name> --print-topic
----

// TODO: Verify this output format by running actual rpk shadow status command during failover
Example output during successful failover:

----
Shadow Link: <shadow-link-name>

Overview:
NAME    <shadow-link-name>
UID     <uid>
STATE   ACTIVE

Tasks:
Name         Broker_ID  State   Reason
<task-name>  1          ACTIVE
<task-name>  2          ACTIVE

Topics:
Name: <topic-name>, State: FAILED_OVER
Name: <topic-name>, State: FAILED_OVER
Name: <topic-name>, State: FAILING_OVER
----

**Wait for**: All critical topics to reach `FAILED_OVER` state before proceeding.
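
If you prefer a scriptable wait over `watch`, the following minimal sketch polls the status until no topic still reports `FAILING_OVER`. It greps the human-readable output, so adjust the pattern if your rpk version formats the status differently:

[,bash]
----
# Poll every 10 seconds until no topic is still failing over
while rpk shadow status <shadow-link-name> | grep -q "FAILING_OVER"; do
  echo "Topics still failing over, waiting..."
  sleep 10
done
echo "No topics left in FAILING_OVER state"
----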

[[update-applications]]
=== Update application configuration

Redirect your applications to the shadow cluster by updating their connection strings to point to the shadow cluster brokers. If you use DNS-based service discovery, update the DNS records accordingly. Restart applications so they pick up the new connection settings, and verify connectivity from application hosts to the shadow cluster.
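
A minimal sketch of connectivity checks to run from an application host after the switch; the hostnames are the placeholder addresses used earlier in this guide:

[,bash]
----
# Confirm DNS now resolves to the shadow cluster's addresses
dig +short shadow-cluster-1.example.com

# Confirm the Kafka API is reachable from this host
rpk cluster info --brokers shadow-cluster-1.example.com:9092
----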

[[verify-functionality]]
=== Verify application functionality

Test critical application workflows:

[,bash]
----
# Verify applications can produce messages
rpk topic produce <topic-name> --brokers <shadow-cluster-address>:9092

# Verify applications can consume messages
rpk topic consume <topic-name> --brokers <shadow-cluster-address>:9092 --num 1
----

Test message production and consumption, consumer group functionality, and critical business workflows to ensure everything is working properly.
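
To check consumer group functionality specifically, here is a brief sketch using rpk's consumer group commands; the group name is a placeholder:

[,bash]
----
# List consumer groups known to the shadow cluster
rpk group list --brokers <shadow-cluster-address>:9092

# Inspect one group's members, offsets, and lag
rpk group describe <group-name> --brokers <shadow-cluster-address>:9092
----

If a group's committed offsets look wrong, see the consumer group offset issues described in the troubleshooting section below.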

[[cleanup-stabilize]]
=== Clean up and stabilize

After all applications are running normally:

[,bash]
----
# Optional: Delete the shadow link (no longer needed)
rpk shadow delete <shadow-link-name>
----

Document the time of failover initiation and completion, the applications affected and their recovery times, data loss estimates based on replication lag, and any issues encountered during failover.
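
A minimal sketch for capturing that record while the details are fresh, using only commands shown earlier in this runbook; the log file name is just a convention, and if you plan to delete the shadow link, capture this before doing so:

[,bash]
----
STAMP=$(date +%Y%m%d-%H%M%S)

{
  echo "=== Failover record captured at $STAMP ==="
  rpk shadow status <shadow-link-name>
  rpk cluster info --brokers <shadow-cluster-address>:9092
  rpk group list --brokers <shadow-cluster-address>:9092
} > failover-record-$STAMP.log
----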

== Troubleshoot common issues

=== Topics stuck in FAILING_OVER state

**Problem**: Topics remain in `FAILING_OVER` state for extended periods.

**Solution**: Check shadow cluster logs for specific error messages and ensure sufficient cluster resources (CPU, memory, disk space) are available on the shadow cluster. Verify network connectivity between shadow cluster nodes, and confirm that all shadow topic partitions have elected leaders and that the controller partition is properly replicated with an active leader.
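
A brief sketch of those health checks against the shadow cluster; the exact output fields vary by rpk version:

[,bash]
----
# Reports the controller, any down brokers, and leaderless partitions
rpk cluster health --brokers <shadow-cluster-address>:9092

# Spot-check that the stuck topic's partitions have leaders
rpk topic describe <topic-name> --brokers <shadow-cluster-address>:9092
----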

If topics remain stuck after addressing these cluster health issues and you need immediate failover, you can force delete the shadow link to fail over all topics:

[,bash]
----
# Force delete the shadow link to fail over all topics
rpk shadow delete <shadow-link-name> --force
----

[WARNING]
====
Force deleting a shadow link immediately fails over all topics in the link. This action is irreversible and should be used only when topics are stuck and you need immediate access to all replicated data.
====

=== Topics in FAULTED state

**Problem**: Topics show `FAULTED` state and are not replicating.

**Solution**: Check for authentication issues, network connectivity problems, or source cluster unavailability. Verify that the shadow link service account still has the required permissions on the source cluster. Review shadow cluster logs for specific error messages about the faulted topics.
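
A minimal sketch for checking the service account against the source cluster, assuming SASL/SCRAM credentials; depending on your rpk version the ACL command may be `rpk acl list` rather than `rpk security acl list`:

[,bash]
----
# Can the service account still authenticate against the source cluster?
rpk cluster info --brokers prod-cluster-1.example.com:9092 \
  -X user=<service-account> -X pass=<password> -X sasl.mechanism=SCRAM-SHA-256

# Do its ACLs still grant access to the replicated topics?
rpk security acl list --brokers prod-cluster-1.example.com:9092 \
  -X user=<service-account> -X pass=<password> -X sasl.mechanism=SCRAM-SHA-256
----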

=== Application connection failures

**Problem**: Applications cannot connect to the shadow cluster after failover.

**Solution**: Verify that the shadow cluster broker endpoints are correct, and check security group and firewall rules. Confirm that authentication credentials are valid for the shadow cluster, and test network connectivity from application hosts.
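
A short sketch of connectivity checks to run from an affected application host; it uses generic networking tools plus rpk, and the address is a placeholder:

[,bash]
----
# Is the Kafka port reachable at the network level?
nc -vz <shadow-cluster-address> 9092

# Does a basic client connection succeed from this host?
rpk cluster info --brokers <shadow-cluster-address>:9092
----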

=== Consumer group offset issues

**Problem**: Consumers start from the beginning or from wrong positions.

**Solution**: Verify that consumer group offsets were replicated (check your filters), and use `rpk group describe <group-name>` to check offset positions. If necessary, manually reset offsets to appropriate positions. See link:https://support.redpanda.com/hc/en-us/articles/23499121317399-How-to-manage-consumer-group-offsets-in-Redpanda[How to manage consumer group offsets in Redpanda^] for detailed reset procedures.
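
As a sketch of a manual reset, assuming the group's consumers are stopped (offsets generally cannot be reset while the group is active) and that your rpk version supports `rpk group seek`:

[,bash]
----
# Check the group's current committed offsets and lag
rpk group describe <group-name> --brokers <shadow-cluster-address>:9092

# Reset the group to the end of a topic so consumers skip already-processed data
rpk group seek <group-name> --to end --topics <topic-name> \
  --brokers <shadow-cluster-address>:9092
----

Whether to seek to `start`, `end`, or a timestamp depends on whether reprocessing or skipping messages is the safer failure mode for your application.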

== Next steps

After successful failover, focus on recovery planning and process improvement. Begin by assessing the source cluster failure and determining whether to restore the original cluster or permanently promote the shadow cluster as your new primary.

**Immediate recovery planning:**

1. **Assess source cluster**: Determine the root cause of the outage
2. **Plan recovery**: Decide whether to restore the source cluster or promote the shadow cluster permanently
3. **Data synchronization**: Plan how to synchronize any data produced during failover
4. **Fail forward**: Create a new shadow link with the failed-over shadow cluster as the source to maintain a DR cluster

**Process improvement:**

After identifying the cause and resolving the cluster failure, resume your regular disaster recovery planning tasks, which should include:

1. **Document the incident**: Record the timeline, impact, and lessons learned
2. **Update runbooks**: Improve procedures based on what you learned
3. **Test regularly**: Schedule regular disaster recovery drills
4. **Review monitoring**: Ensure monitoring caught the issue appropriately

Review note: The overview link isn't rendering. Best to not use relative links.