diff --git a/docs/features/physical.md b/docs/features/physical.md
index abb518b0..3db9e640 100644
--- a/docs/features/physical.md
+++ b/docs/features/physical.md
@@ -12,9 +12,25 @@
| [2.3.0](../release-notes/2.3.0.md) | Physical backups in mixed deployments |
| [2.10.0](../release-notes/2.10.0.md) | Physical restore with a fallback directory |
+**Physical backup** is the copying of physical files from the Percona Server for MongoDB `dbPath` data directory to the remote backup storage. These files include data files, the journal, index files, and so on. Percona Backup for MongoDB also copies the WiredTiger storage options to the backup's metadata.
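+
+For reference, this is how a physical backup is taken with the `pbm` CLI (a minimal sketch; it assumes the backup storage and authentication are already configured):
+
+```bash
+# take a physical backup of the deployment
+pbm backup --type=physical
+
+# check the backup progress and the resulting backup name
+pbm status
+```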
+
+**Physical restore** is the reverse process: `pbm-agents` shut down the `mongod` nodes, clean up the `dbPath` data directory, and copy the physical files from the storage into it.
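+
+For illustration, a physical restore is started by the backup name (the name below is illustrative; take the real one from the `pbm list` output):
+
+```bash
+# list the available backups and pick a physical one
+pbm list
+
+# start the physical restore (illustrative backup name)
+pbm restore 2024-05-20T10:15:00Z
+```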
+
+The following diagram shows the physical restore flow:
+
+
+
+During the restore, the `pbm-agents` temporarily start the `mongod` nodes using the WiredTiger storage options retrieved from the backup's metadata. The logs of these starts are saved to the `pbm.restore.log` file inside the `dbPath`. Upon a successful restore, this file is deleted. If the restore fails, the file remains available for debugging.
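+
+For example, if a restore fails, you can inspect this file directly on the affected node. A sketch, assuming the `dbPath` is `/var/lib/mongodb` (use the path from your `mongod` configuration):
+
+```bash
+# dbPath is assumed to be /var/lib/mongodb; adjust to your configuration
+tail -n 100 /var/lib/mongodb/pbm.restore.log
+```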
+
+During physical backups and restores, `pbm-agents` don't export data from or import data into the database. This significantly reduces the backup and restore time compared to logical backups, which makes the physical method the recommended one for big (multi-terabyte) databases.
+
+| Advantages | Disadvantages |
+| ------------------------------ | ------------------------------- |
+| - Faster backup and restore speed <br> - Recommended for big, multi-terabyte datasets <br> - No database overhead | - The backup size is bigger than for logical backups due to data fragmentation and the extra cost of keeping data and indexes in appropriate data structures <br> - Extra manual operations are required after the restore <br> - Point-in-time recovery requires manual operations |
+
+Physical backups and restores are supported for sharded clusters and non-sharded replica sets.
+
## Availability and system requirements
-* Percona Server for MongoDB starting from versions 4.2.15-16, 4.4.6-8, 5.0 and higher.
+* Percona Server for MongoDB starting from versions 4.2.15-16, 4.4.6-8, 5.0 and higher.
* WiredTiger is used as the storage engine in Percona Server for MongoDB, since physical backups heavily rely on the WiredTiger [`$backupCursor` :octicons-link-external-16:](https://docs.percona.com/percona-server-for-mongodb/6.0/backup-cursor.html) functionality.
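+
+To verify that a node meets this requirement, you can check the active storage engine, for example (the connection string is illustrative):
+
+```bash
+# prints the active storage engine; the expected output is wiredTiger
+mongosh "mongodb://localhost:27017" --quiet --eval "db.serverStatus().storageEngine.name"
+```
+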
!!! warning
@@ -28,22 +44,6 @@
* [Physical Backup Support in Percona Backup for MongoDB :octicons-link-external-16:](https://www.percona.com/blog/physical-backup-support-in-percona-backup-for-mongodb/)
* [$backupCursorExtend in Percona Server for MongoDB :octicons-link-external-16:](https://www.percona.com/blog/2021/06/07/experimental-feature-backupcursorextend-in-percona-server-for-mongodb/)
-Physical backup is copying of physical files from the Percona Server for MongoDB `dbPath` data directory to the remote backup storage. These files include data files, journal, index files, etc. Starting with version 2.0.0, Percona Backup for MongoDB also copies the WiredTiger storage options to the backup's metadata.
-
-Physical restore is the reverse process: `pbm-agents` shut down the `mongod` nodes, clean up the `dbPath` data directory and copy the physical files from the storage to it.
-
-The following diagram shows the physical restore flow:
-
-
-
-During the restore, the ``pbm-agents`` temporarily start the ``mongod`` nodes using the WiredTiger storage options retrieved from the backup's metadata. The logs for these starts are saved to the ``pbm.restore.log`` file inside the ``dbPath``. Upon successful restore, this file is deleted. However, it remains for debugging if the restore were to fail.
-
-During physical backups and restores, ``pbm-agents`` don't export / import data from / to the database. This significantly reduces the backup / restore time compared to logical ones and is the recommended backup method for big (multi-terabyte) databases.
-
-| Advantages | Disadvantages |
-| ------------------------------ | ------------------------------- |
-|- Faster backup and restore speed <br> - Recommended for big, multi-terabyte datasets <br> - No database overhead | - The backup size is bigger than for logical backups due to data fragmentation extra cost of keeping data and indexes in appropriate data structures <br> - Extra manual operations are required after the restore <br> - Point-in-time recovery requires manual operations | Sharded clusters and non-sharded replica sets |
-
[Make a backup](../usage/backup-physical.md){ .md-button }
[Restore a backup](../usage/restore-physical.md){ .md-button }
diff --git a/docs/troubleshoot/faq.md b/docs/troubleshoot/faq.md
index 76568e2a..71d9d7fc 100644
--- a/docs/troubleshoot/faq.md
+++ b/docs/troubleshoot/faq.md
@@ -36,7 +36,34 @@ Yes. The preconditions for both Point-in-Time Recovery restore and regular resto
2. Make sure no writes are made to the database during restore. This ensures data consistency.
-3. Disable Point-in-Time Recovery if it is enabled. This is because oplog slicing and restore are exclusive operations and cannot be run together. Note that oplog slices made after the restore and before the next backup snapshot become invalid. Make a fresh backup and re-enable Point-in-Time Recovery.
+## Why did my physical backup fail with Location50917 or Location50915 errors?
+
+Both `Location50917` and `Location50915` errors happen when Percona Backup for MongoDB attempts to open a `$backupCursor` during WiredTiger checkpoint operations.
+
+* **Location50917** occurs when opening a backup cursor conflicts with an active checkpoint operation.
+* **Location50915** indicates a similar timing conflict related to checkpoint operations during backup cursor initialization.
+
+These errors typically happen in environments with:
+
+* High write workloads that trigger frequent checkpoint operations
+* Multiple concurrent backup operations
+* Database nodes under heavy load
+* Active checkpoint operations during backup initialization
+
+The errors are transient and typically resolve themselves once the checkpoint operation completes. Starting with version 2.13.0, Percona Backup for MongoDB automatically retries opening the backup cursor when encountering these errors.
+
+Percona Backup for MongoDB retries the operation up to 10 times with linear backoff. In most cases, the backup succeeds on a subsequent retry attempt without requiring manual intervention.
+
+If your backup continues to fail after automatic retries, do the following:
+
+1. Check the `pbm logs` output for detailed error information and retry attempts (see the example after this list)
+2. Verify that your MongoDB nodes have sufficient resources (CPU, memory, disk I/O)
+3. Ensure that no other backup or maintenance operations are competing for resources on the affected nodes
+4. Consider scheduling backups during periods of lower database activity if the issue persists
+5. Review checkpoint frequency settings if checkpoints are occurring excessively
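+
+For reference, a minimal sketch of checking the backup-related log entries (the option values are illustrative):
+
+```bash
+# show the last 100 error-level entries for backup events
+pbm logs --tail=100 --severity=E --event=backup
+```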
+
+
## Can I install PBM on MacBook?
diff --git a/docs/usage/restore-physical.md b/docs/usage/restore-physical.md
index ae2d1ea7..f90c5137 100644
--- a/docs/usage/restore-physical.md
+++ b/docs/usage/restore-physical.md
@@ -51,9 +51,13 @@
During the physical restore, `pbm-agent` processes stop the `mongod` nodes, clean up the data directory and copy the data from the storage onto every node. During this process, the database is restarted several times.
- You can [track the restore progress](restore-progress.md) using the `pbm describe-restore` command. Don't run any other commands since they may interrupt the restore flow and cause the issues with the database.
+3. [Track the restore progress](restore-progress.md) using the `pbm describe-restore` command, as shown in the example below. Don't run any other commands since they may interrupt the restore flow and cause issues with the database.
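+
+    Since the `mongod` nodes are down during a physical restore, provide the PBM configuration file to the command explicitly. A sketch (the file path and the restore name are illustrative):
+
+    ```bash
+    # the restore name is printed when you start the restore
+    pbm describe-restore -c /etc/pbm/pbm_config.yaml 2024-05-20T11:02:35.234124512Z
+    ```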
- A restore has the `Done` status when it succeeded on all nodes. If it failed on some nodes, it has the `partlyDone` status but you can still start the cluster. The failed nodes will receive the data via the initial sync. For either status, proceed with the [post-restore steps](#post-restore-steps). Learn more about partially done restores in the [Partially done physical restores](../troubleshoot/restore-partial.md) chapter.
+A restore has the `Done` status when it succeeded on all nodes.
+
+If the restore failed on some nodes, it has the `partlyDone` status, but you can still start the cluster. The failed nodes will receive the data via initial sync. Learn more about partially done restores in the [Partially done physical restores](../troubleshoot/restore-partial.md) chapter.
+
+For either status, proceed with the [post-restore steps](#post-restore-steps).
### Post-restore steps