Add GUC controlling whether to pause recovery if some critical GUCs at the replica have a smaller value than on the primary #9057

Open
wants to merge 4 commits into main

Conversation

knizhnik
Contributor

Problem

See #9023

Summary of changes

Add GUC recovery_pause_on_misconfig, allowing recovery not to pause in case of a replica/primary configuration mismatch (a usage sketch follows the links below).

See neondatabase/postgres#501
See neondatabase/postgres#502
See neondatabase/postgres#503
See neondatabase/postgres#504
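
For illustration, a minimal usage sketch (hypothetical, not part of this PR's diff) of how the new GUC could be exercised from the Python test suite, assuming it is a plain boolean settable via config_lines; the fixture names and import follow the existing tests:

from fixtures.neon_fixtures import NeonEnv  # assumed import path, as in the existing tests


def test_replica_with_pause_on_misconfig_disabled(neon_simple_env: NeonEnv):
    # Hypothetical sketch: start a replica whose max_connections is smaller than
    # the primary's, with the new GUC turned off so recovery keeps replaying
    # instead of pausing on the mismatch.
    env = neon_simple_env
    with env.endpoints.create_start(branch_name="main", endpoint_id="primary") as primary:
        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=[
                "max_connections=5",                  # deliberately smaller than on the primary
                "recovery_pause_on_misconfig=false",  # GUC name taken from the summary above
            ],
        ) as secondary:
            with secondary.connect() as s_con, s_con.cursor() as s_cur:
                s_cur.execute("show max_connections")
                assert s_cur.fetchall()[0][0] == "5"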

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.
  • Do we need to implement analytics? If so, did you add the relevant metrics to the dashboard?
  • If this PR requires public announcement, mark it with /release-notes label and add several sentences in this section.

Checklist before merging

  • Do not forget to reformat commit message to not include the above checklist

@knizhnik knizhnik requested a review from a team as a code owner September 19, 2024 06:58

github-actions bot commented Sep 19, 2024

5373 tests run: 5148 passed, 3 failed, 222 skipped (full report)


Failures on Postgres 17

# Run all failed tests locally:
scripts/pytest -vv -n $(nproc) -k "test_postgres_version[release-pg17] or test_postgres_version[release-pg17] or test_postgres_version[debug-pg17]"
Flaky tests (1)

Postgres 17

Test coverage report is not available

The comment gets automatically updated with the latest test results
f3b1846 at 2024-11-08T13:57:58.084Z :recycle:

Contributor

@MMeent MMeent left a comment

See: neondatabase/postgres#501

I don't think we should be exposing this GUC in the global namespace, but rather from the extension's namespace (so we can only enable this behavior when the extension is loaded)

As for !recoveryPauseOnMisconfig, this can cause us to see more active transactions on a standby than that standby expects.

What's the behaviour when we see >max_connections concurrently active transactions on a hot standby?

@knizhnik
Contributor Author

See: neondatabase/postgres#501

I don't think we should be exposing this GUC in the global namespace, but rather from the extension's namespace (so we can only enable this behavior when the extension is loaded)

Sorry, I do not understand it. We are using this GUC in Postgres core code. Certainly we can register the variable in Postgres core but define the GUC which changes it in the neon extension, but that is IMHO a strange solution. We could also add yet another hook (what to do on misconfig) and define it in the Neon extension.

But I think that such a "less strict" policy for checking the primary/replica configuration will be useful not only to Neon, so maybe at some point it will be possible to propose this patch to the community.

As for !recoveryPauseOnMisconfig, this can cause us to see more active transactions on a standby than that standby expects.

What's the behaviour when we see >max_connections concurrently active transactions on a hot standby?

I have added a test for this case. Transactions are applied normally. It is the expected behaviour: LR serialises transactions, and only one transaction is applied at any moment in time.

In some other cases of misconfiguration (for example, when max_prepared_transactions is smaller at the replica than at the primary), the replica will crash with a fatal error:

PG:2024-09-23 06:29:50.661 GMT [38233] FATAL:  maximum number of prepared transactions reached
PG:2024-09-23 06:29:50.661 GMT [38233] HINT:  Increase max_prepared_transactions (currently 5).
PG:2024-09-23 06:29:50.661 GMT [38233] CONTEXT:  WAL redo at 0/154CAA8 for Transaction/PREPARE: gid t5: 2024-09-23 06:29:50.661593+00
PG:2024-09-23 06:29:50.662 GMT [38230] LOG:  startup process (PID 38233) exited with exit code 1
PG:2024-09-23 06:29:50.662 GMT [38230] LOG:  terminating any other active server processes
PG:2024-09-23 06:29:50.663 GMT [38230] LOG:  shutting down due to startup process failure
PG:2024-09-23 06:29:50.663 GMT [38230] LOG:  database system is shut down

after which the control plane should restart the replica with synced config parameters, so the next recovery attempt should succeed.

@MMeent
Contributor

MMeent commented Sep 23, 2024

What's the behaviour when we see >max_connections concurrently active transactions on a hot standby?

I have added a test for this case. Transactions are applied normally. It is the expected behaviour: LR serialises transactions, and only one transaction is applied at any moment in time.

I don't care about LR, I care about normal replication, which does not serialize transactions. And in that case, we're probably writing visibility information into data structs sized to MaxBackends, while we're writing > MaxBackends values into those, which is probably very unsafe.

Did you check that visibility information is correctly applied even at large concurrency?

Note that a replica's transaction state tracking is sized to hold 64 * (MaxBackends + max_prepared_xacts) entries, so disarming this check is bound to create errors elsewhere during replay of transaction state.

E.g. spin up a replica with max_connections=5, a primary with max_connections=1000, and start a transaction in all those max_connections of the primary. If my math is correct, the WAL redo process on the replica will throw an error before even half of the connections' transaction IDs have been received because it ran out of space to put those transaction IDs. I had to check that we didn't silently write into shared structs, and am happy we don't, but it's really not good to remove protections and assume everything still works just fine because you did some light testing, because usually checks are in place to protect us against exactly those extreme cases.
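
For a rough sense of the numbers behind that sizing, here is a back-of-the-envelope sketch (an illustration added for this discussion, not code from the PR; the exact formula in PostgreSQL differs slightly, e.g. it also counts the autovacuum launcher, and the defaults below are vanilla PostgreSQL defaults):

# Rough capacity estimate for the standby's KnownAssignedXids array, following the
# 64 * (MaxBackends + max_prepared_xacts) figure quoted above. MaxBackends is
# approximated as the sum of the connection- and worker-related GUCs.
def known_assigned_xids_capacity(
    max_connections: int,
    autovacuum_max_workers: int = 3,
    max_worker_processes: int = 8,
    max_wal_senders: int = 10,
    max_prepared_transactions: int = 0,
) -> int:
    max_backends = (
        max_connections + autovacuum_max_workers + max_worker_processes + max_wal_senders
    )
    return 64 * (max_backends + max_prepared_transactions)


# A replica with max_connections=5 but default worker settings still tracks far
# more than 90 concurrent XIDs, which is why the mismatch alone is hard to trip:
print(known_assigned_xids_capacity(5))            # 1664
# Clamping the worker-related settings as well shrinks the array dramatically:
print(known_assigned_xids_capacity(2, 1, 5, 1))   # 576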

@knizhnik
Contributor Author

What's the behaviour when we see >max_connections concurrently active transactions on a hot standby?

I have added a test for this case. Transactions are applied normally. It is the expected behaviour: LR serialises transactions, and only one transaction is applied at any moment in time.

I don't care about LR, I care about normal replication, which does not serialize transactions. And in that case, we're probably writing visibility information into data structs sized to MaxBackends, while we're writing > MaxBackends values into those, which is probably very unsafe.

Did you check that visibility information is correctly applied even at large concurrency?

Note that a replica's transaction state tracking is sized to hold 64 * (MaxBackends + max_prepared_xacts) entries, so disarming this check is bound to create errors elsewhere during replay of transaction state.

E.g. spin up a replica with max_connections=5, a primary with max_connections=1000, and start a transaction in all those max_connections of the primary. If my math is correct, the WAL redo process on the replica will throw an error before even half of the connections' transaction IDs have been received because it ran out of space to put those transaction IDs. I had to check that we didn't silently write into shared structs, and am happy we don't, but it's really not good to remove protections and assume everything still works just fine because you did some light testing, because usually checks are in place to protect us against exactly those extreme cases.

Sorry, many different tickets are mixed up in my head :(
Certainly I was not speaking about physical replication, and my reply that LR serialises transactions is completely irrelevant here.

But I failed to reproduce the recovery failure with max_connections equal to 100 at the primary and just 5 at the replica. I ran 90 parallel transactions and they were replicated normally:

import time

from fixtures.neon_fixtures import NeonEnv  # assuming the test suite's usual import


def test_physical_replication_config_mismatch(neon_simple_env: NeonEnv):
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
    ) as primary:
        with primary.connect() as p_con:
            with p_con.cursor() as p_cur:
                p_cur.execute(
                    "CREATE TABLE t(pk bigint primary key, payload text default repeat('?',200))"
                )
        time.sleep(1)

        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=["max_connections=5"],
        ) as secondary:
            with secondary.connect() as s_con:
                with s_con.cursor() as s_cur:
                    cursors = []
                    for i in range(90):
                        p_con = primary.connect()
                        p_cur = p_con.cursor()
                        p_cur.execute("begin")
                        p_cur.execute("insert into t (pk) values (%s)", (i,))
                        cursors.append(p_cur)

                    for p_cur in cursors:
                        p_cur.execute("commit")

                    time.sleep(5)
                    s_cur.execute("select count(*) from t")
                    assert s_cur.fetchall()[0][0] == 90
                    s_cur.execute("show max_connections")
                    assert s_cur.fetchall()[0][0] == '5'

Any idea why it works?

@MMeent
Contributor

MMeent commented Sep 23, 2024

But I failed to reproduce the recovery failure with max_connections equal to 100 at the primary and just 5 at the replica. I ran 90 parallel transactions and they were replicated normally:

I said 1000, not 100.

The issue occurs at n_entries >= 64 * max_connections, so you'll have to consume 320 (multi)xids for that array to fill up at max_connections=5. I'd test with 1, but I'm not sure we can start with max_connections=1. If we can, you can use that as replica node setting instead.

@knizhnik
Contributor Author

But I failed to reproduce the recovery failure with max_connections equal to 100 at the primary and just 5 at the replica. I ran 90 parallel transactions and they were replicated normally:

I said 1000, not 100.

The issue occurs at n_entries >= 64 * max_connections, so you'll have to consume 320 (multi)xids for that array to fill up at max_connections=5. I'd test with 1, but I'm not sure we can start with max_connections=1. If we can, you can use that as replica node setting instead.

Sorry, can you explain the source of this formula: n_entries >= 64 * max_connections?
I changed the test to 400 connections at the primary and 4 at the replica, and it still passed:

def test_physical_replication_config_mismatch(neon_simple_env: NeonEnv):
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
        config_lines=["max_connections=500"],
    ) as primary:
        with primary.connect() as p_con:
            with p_con.cursor() as p_cur:
                p_cur.execute(
                    "CREATE TABLE t(pk bigint primary key, payload text default repeat('?',200))"
                )
        time.sleep(1)

        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=["max_connections=4"],
        ) as secondary:
            with secondary.connect() as s_con:
                with s_con.cursor() as s_cur:
                    cursors = []
                    for i in range(400):
                        p_con = primary.connect()
                        p_cur = p_con.cursor()
                        p_cur.execute("begin")
                        p_cur.execute("insert into t (pk) values (%s)", (i,))
                        cursors.append(p_cur)

                    time.sleep(5)

                    for p_cur in cursors:
                        p_cur.execute("commit")
                    time.sleep(2)

                    s_cur.execute("select count(*) from t")
                    assert s_cur.fetchall()[0][0] == 400

@knizhnik
Contributor Author

With 900 connections at the primary the test also passed.

@MMeent
Contributor

MMeent commented Oct 16, 2024

OK, I've found a case where we hit the elog(ERROR) in KnownAssignedXidsAdd on the secondary.

Configuration:

Primary:

max_connections=1000

Secondary:

max_connections=2
autovacuum_max_workers=1
max_worker_processes=5
max_wal_senders=1
superuser_reserved_connections=0

Execute 650+ transactions like the following concurrently on the primary, e.g. with pgbench -c 990 -f script.sql:

BEGIN;
INSERT INTO test SELECT 1;
SELECT pg_sleep(10);
COMMIT;

You can adjust the secondary's max_connections upward if you increase the number of subxacts consumed by the benchmark transaction before the pg_sleep() operation.
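
For illustration, one hypothetical way to build such a subxact-heavy script (the savepoint count and file name are made up; each savepoint that performs a write is assigned an extra subtransaction XID, so every client holds 1 + N in-progress XIDs that the standby must track):

# Hypothetical helper: write a variant of the benchmark script above in which each
# transaction also consumes N_SUBXACTS subtransaction XIDs via writing savepoints.
N_SUBXACTS = 10

lines = ["BEGIN;", "INSERT INTO test SELECT 1;"]
for i in range(N_SUBXACTS):
    lines.append(f"SAVEPOINT s{i};")
    lines.append("INSERT INTO test SELECT 1;")  # the write assigns a subxact XID
lines += ["SELECT pg_sleep(10);", "COMMIT;"]

with open("script.sql", "w") as f:
    f.write("\n".join(lines) + "\n")

# Then run it as in the scenario above: pgbench -c 990 -f script.sql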

@MMeent
Contributor

MMeent commented Oct 17, 2024

@knizhnik can you verify my findings?

@knizhnik
Contributor Author

@knizhnik can you verify my findings?

I have created a test based on your scenario and reproduced FATAL: too many KnownAssignedXids.
But this is actually expected behaviour!
Yes, we know that smaller values of some critical parameters can cause recovery failure.
The probability of this is expected to be very small, but it can happen and cause a replica restart.
As the replica is always restarted from the most recent LSN, it skips the transactions which caused this failure and most likely will be able to proceed.

So I do not treat this error as a reason for rejecting this approach, do you?

@MMeent
Contributor

MMeent commented Oct 17, 2024

As the replica is always restarted from the most recent LSN, it skips the transactions which caused this failure and most likely will be able to proceed.

I don't see it that way. If a user has a workload that causes this crash, then it's likely they will hit this again. And I don't like the idea of a primary that can consistently cause a secondary to crash.

@knizhnik
Contributor Author

As the replica is always restarted from the most recent LSN, it skips the transactions which caused this failure and most likely will be able to proceed.

I don't see it that way. If a user has a workload that causes this crash, then it's likely they will hit this again. And I don't like the idea of a primary that can consistently cause a secondary to crash.

Well, somebody needs to make a decision.
I see very good reasons to support different configurations of primary and replica (different workloads, OLTP vs. OLAP, ...).

The probability of this kind of problem is very, very low. In your case we had to specify max_connections=1000 for the primary and just 2 for the replica; in real life nobody will ever set up such a configuration. Moreover, we do not allow users to alter GUCs which are critical for replication (like max_connections, max_prepared_transactions, ...). The values of some of these GUCs are now fixed and some of them depend on the number of CUs, and the possible range of values, for example for max_connections, excludes such situations and crashes of the replica.

Also, as far as I understand, @hlinnaka is going to use his patch with CSN at the replica, which will completely eliminate this problem with known XIDs. Yes, that may not happen soon (still, I hope that we will do it before the patch is committed in vanilla).

And one last point: if some customer manages to spawn 1000 active transactions, then most likely they will face many other problems (OOM, local disk space exhaustion, ...) much more critical than the problems with replication.

Contributor

@hlinnaka hlinnaka left a comment

Let's add tests for overrunning max_prepared_transactions and max_locks_per_transaction too, i.e. a test that creates many prepared transactions in the primary, and a test that acquires a lot of AccessExclusiveLocks in the primary.
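
For illustration, a hypothetical sketch of the first of those tests (not the PR's actual test; names are made up, and the idiom follows the existing tests):

import time

from fixtures.neon_fixtures import NeonEnv  # assumed import path, as in the existing tests


def test_replica_prepared_xacts_overrun_sketch(neon_simple_env: NeonEnv):
    # Hypothetical sketch: prepare more transactions on the primary than the
    # replica's max_prepared_transactions allows, so that replay hits the
    # "maximum number of prepared transactions reached" FATAL shown earlier.
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
        config_lines=["max_prepared_transactions=20"],
    ) as primary:
        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=["max_prepared_transactions=5"],
        ) as secondary:
            with primary.connect() as p_con, p_con.cursor() as p_cur:
                p_cur.execute("create table t(x int)")
                for i in range(10):
                    # After PREPARE TRANSACTION the session detaches from the
                    # transaction, so one connection can prepare all of them.
                    p_cur.execute("begin")
                    p_cur.execute("insert into t values (%s)", (i,))
                    p_cur.execute(f"prepare transaction 'gid{i}'")
            time.sleep(5)
            # With 10 prepared transactions replayed against a limit of 5, the
            # replica's startup process is expected to fail as in the log above;
            # the real test would assert on that behaviour (e.g. by checking logs).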



def test_physical_replication_config_mismatch(neon_simple_env: NeonEnv):
    env = neon_simple_env
Contributor

please add a test description

Contributor Author

Done

Contributor Author

Let's add tests for overrunning max_prepared_transactions and max_locks_per_transaction too, i.e. a test that creates many prepared transactions in the primary, and a test that acquires a lot of AccessExclusiveLocks in the primary.

I have added tests for max_prepared_transactions and for the "too many known xids" case proposed by @MMeent.
In both cases the walreceiver is stopped with a fatal error.

I failed to cause a replica crash because of lock exhaustion (but I didn't change max_locks_per_transaction, just max_connections). Do you want yet another negative test, or are two enough?
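
For reference, here is a rough sketch of what the lock-exhaustion attempt could look like (hypothetical, not part of this PR; the standby's lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, so shrinking max_locks_per_transaction on the replica should let the replayed AccessExclusiveLocks overrun it):

import time

from fixtures.neon_fixtures import NeonEnv  # assumed import path, as in the existing tests


def test_replica_lock_table_overrun_sketch(neon_simple_env: NeonEnv):
    # Hypothetical sketch of the AccessExclusiveLock case: take many such locks in
    # one transaction on the primary and replay them on a replica with a much
    # smaller lock table.
    env = neon_simple_env
    with env.endpoints.create_start(
        branch_name="main",
        endpoint_id="primary",
        config_lines=["max_locks_per_transaction=1024"],
    ) as primary:
        with primary.connect() as p_con, p_con.cursor() as p_cur:
            for i in range(1000):
                p_cur.execute(f"create table t{i}(x int)")

        with env.endpoints.new_replica_start(
            origin=primary,
            endpoint_id="secondary",
            config_lines=["max_connections=5", "max_locks_per_transaction=16"],
        ) as secondary:
            with primary.connect() as p_con, p_con.cursor() as p_cur:
                p_cur.execute("begin")
                for i in range(1000):
                    # Each AccessExclusiveLock is WAL-logged and has to be held by
                    # the startup process on the replica until the commit is replayed.
                    p_cur.execute(f"lock table t{i} in access exclusive mode")
                p_cur.execute("commit")

            time.sleep(5)
            # The interesting part is how the replica reacts: either its startup
            # process fails with "out of shared memory", or replay survives; the
            # real test would assert on whichever behaviour we decide is acceptable.
            with secondary.connect() as s_con, s_con.cursor() as s_cur:
                s_cur.execute("select 1")
                assert s_cur.fetchall()[0][0] == 1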

Labels: None yet
Projects: None yet
3 participants