forked from postgres/postgres
merge code from postgres/postgres #1
Merged
Noted while studying the subplan hash issue.
Commit 356687b omitted to remove leftover code for destroying a hashed subplan's hash tables, with the result that the tables were always rebuilt, not reused; this leads to severe memory leakage if a hashed subplan is re-executed enough times. Moreover, the code for reusing the hashnulls table had a typo that would have made it do the wrong thing if it were reached. Looking at the code coverage report shows severe under-coverage of the potential callers of ResetTupleHashTable, so add some test cases that exercise them. Andreas Karlsson and Tom Lane, per reports from Ranier Vilela and Justin Pryzby. Backpatch to v11, as the faulty commit was. Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/CAEudQAo=DCebm1RXtig9OH+QivpS97sMkikt0A9qHmMUs+g6ZA@mail.gmail.com Discussion: https://postgr.es/m/[email protected]
Add a new bt_metap() column to display the metapage's allequalimage field. Also add three new columns to contrib/pageinspect's bt_page_items() function:

* Add a boolean column ("dead") that displays the LP_DEAD bit value for each non-pivot tuple.
* Add a TID column ("htid") that displays a single heap TID value for each tuple. This is the TID that is returned by BTreeTupleGetHeapTID(), so comparable values are shown for pivot tuples, plain non-pivot tuples, and posting list tuples.
* Add a TID array column ("tids") that displays TIDs from each tuple's posting list, if any. This works just like the "tids" column from pageinspect's gin_leafpage_items() function.

No version bump for the pageinspect extension, since there hasn't been a stable Postgres release since the last version bump (the last bump was part of commit 58b4cb3).

Author: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-WzmSMmU2eNvY9+a4MNP+z02h6sa-uxZvN3un6jY02ZVBSw@mail.gmail.com
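A quick sketch of how the new columns surface (the index name "idx" and the block number are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- New metapage column:
SELECT allequalimage FROM bt_metap('idx');

-- New per-tuple columns, here for block 1 of the index:
SELECT itemoffset, dead, htid, tids
FROM bt_page_items('idx', 1);
```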
Oversight in commit 93ee38e.
When deciding on the local rscale to use for the Taylor series expansion, ln_var() neglected to account for the fact that the result is subsequently multiplied by a factor of 2^(nsqrt+1), where nsqrt is the number of square root operations performed in the range reduction step, which can be as high as 22 for very large inputs. This could result in a loss of precision, particularly when combined with large rscale values, for which a large number of Taylor series terms is required (up to around 400). Fix by computing a few extra digits in the Taylor series, based on the weight of the multiplicative factor log10(2^(nsqrt+1)). It remains to be proven whether or not the other 8 extra digits used for the Taylor series are appropriate, but this at least deals with the obvious oversight of failing to account for the effects of the final multiplication. Per report from Justin AnyhowStep. Reviewed by Tom Lane. Discussion: https://postgr.es/m/[email protected]
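To make the amplification concrete: with $n$ square roots in the range reduction, the computation is, in outline,

$$\ln x \;=\; 2^{\,n+1}\,\operatorname{atanh}\!\left(\frac{y-1}{y+1}\right), \qquad y = x^{1/2^{\,n}},$$

so any error in the truncated series is scaled by $2^{\,n+1}$, i.e. by roughly $\log_{10}(2^{\,n+1}) \approx 0.301\,(n+1)$ decimal digits, which is about 7 digits at the maximum $n = 22$. (The atanh form of the series is an assumption inferred from the description above; the factor $2^{n+1}$ and the log10 weight are taken directly from it.)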
The code path for multi-insert decoding is not stressed yet for catalogs (a future patch may introduce this capability), so no back-patch is needed. Author: Daniel Gustafsson Discussion: https://postgr.es/m/[email protected]
…nments When setting PG_COLOR to "always" or "auto" in a VT100-compatible Windows terminal, the colorized output was not showing up correctly because it is necessary to update the console's output handling mode. This fix adds automatic detection of whether the environment is VT100-compatible, so PG_COLOR=auto can handle both compatible and non-compatible environments. The behavior of PG_COLOR=always remains unchanged: it enforces colorized output even if the environment does not allow it. This fix is based on an initial suggestion from Thomas Munro. Reported-by: Haiying Tang Author: Juan José Santamaría Flecha Reviewed-by: Michail Nikolaev, Michael Paquier, Haiying Tang Discussion: https://postgr.es/m/[email protected] Backpatch-through: 12
Reported-by: Juan José Santamaría Flecha <[email protected]> Discussion: https://www.postgresql.org/message-id/CAC%2BAXB1EDXiRPmiVfh%2BWX79x5vXJDU17k0GkDjfyPgOWO4Y5og%40mail.gmail.com
Copy some assertions from _bt_form_posting() to its sibling function, _bt_update_posting(). Discussion: https://postgr.es/m/CAH2-WzkPR8KMwkL0ap976kmXwBCeukTeHz6fB-U__wvuP1S9Zg@mail.gmail.com
Add a cast to size_t to silence "comparison between signed and unsigned integer expressions" cpluspluscheck warning. Reported-By: Tom Lane Discussion: https://postgr.es/m/[email protected]
Since commit 481c8e9 it tends to produce "error: sys/utime.h: No such file or directory" on non-Windows platforms.
The backend was using strings to represent command tags and doing string comparisons in multiple places, but that's slow and unhelpful. Create a new command list with a supporting structure to use instead; this is stored in a tag-list-file that can be tailored to specific purposes with a caller-definable C macro, similar to what we do for WAL resource managers. The first such uses are a new CommandTag enum and a CommandTagBehavior struct. Replace numerous occurrences of char *completionTag with a QueryCompletion struct so that the code no longer stores information about completed queries in a cstring. Only at the last moment, in EndCommand(), does this get converted to a string. EventTriggerCacheItem no longer holds an array of palloc’d tag strings in sorted order, but rather just a Bitmapset over the CommandTags. Author: Mark Dilger, with unsolicited help from Álvaro Herrera Reviewed-by: John Naylor, Tom Lane Discussion: https://postgr.es/m/[email protected]
Thinko in 2f96613. Per buildfarm, as well as warning seen locally.
If the flag value is lost, a CLUSTER query following REINDEX CONCURRENTLY could fail. Non-concurrent REINDEX is already handling this case consistently. Author: Justin Pryzby Discussion: https://postgr.es/m/[email protected] Backpatch-through: 12
This commit adds the pg_stat_progress_basebackup view, which reports the progress while an application like pg_basebackup is taking a base backup. This uses the progress reporting infrastructure added by c16dc1a, adding support for streaming base backup. Bump catversion. Author: Fujii Masao Reviewed-by: Kyotaro Horiguchi, Amit Langote, Sergei Kornilov Discussion: https://postgr.es/m/[email protected]
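For example, while a pg_basebackup run is in progress, one might watch it with something like the following sketch (column names are from the new view):

```sql
SELECT phase,
       backup_streamed,
       backup_total,
       round(100.0 * backup_streamed / nullif(backup_total, 0), 1)
           AS percent_done
FROM pg_stat_progress_basebackup;
```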
Previously the documentation explained that WAL segment files start at 000000010000000000000000, but the first WAL segment file that initdb creates is 000000010000000000000001, not 000000010000000000000000. This change was caused by old commit 8c843ff, but the documentation had not been updated for a long time. Back-patch to all supported branches. Author: Fujii Masao Reviewed-by: David Zhang Discussion: https://postgr.es/m/CAHGQGwHOmGe2OqGOmp8cOfNVDivq7dbV74L5nUGr+3eVd2CU2Q@mail.gmail.com
Using ALTER TABLE ATTACH PARTITION causes an assertion failure when attempting to work on a partitioned index, because partitioned indexes cannot have partition bounds. The grammar of ALTER TABLE ATTACH PARTITION requires partition bounds, but not ALTER INDEX, so mixing ALTER TABLE with partitioned indexes is confusing. Hence, on HEAD, prevent ALTER TABLE from attaching a partition if the relation involved is a partitioned index. On back-branches, as applications may rely on the existing behavior, just remove the culprit assertion. Reported-by: Alexander Lakhin Author: Amit Langote, Michael Paquier Discussion: https://postgr.es/m/[email protected] Backpatch-through: 11
_bt_split() is passed NULL as its insertion scankey for internal page splits. Two recently added Assert() statements failed to consider this, leading to a crash with pg_upgrade'd BTREE_VERSION < 4 indexes. Remove the assertions. The assertions in question were added by commit 0d861bb, which added nbtree deduplication. It would be possible to fix the assertions directly instead, but they weren't adding much anyway.
Author: Noriyoshi Shinoda
HAVE_WORKING_LINK is meant to indicate support for hard links, mainly for Windows. Here it is used for soft links on Unix, and the functionality is optional anyway, so we can just make it error out normally if needed. Discussion: https://www.postgresql.org/message-id/flat/72fff73f-dc9c-4ef4-83e8-d2e60c98df48%402ndquadrant.com
to_char() has long allowed the TM (translation mode) prefix to specify output of translated month or day names; but that prefix had no effect in input format strings. Now it does. to_date() and to_timestamp() will now recognize the same month or day names that to_char() would output for the same format code. Matching is case-insensitive (per the active collation's notion of what that means), just as it has always been for English month/day names without the TM prefix. (As per the discussion thread, there are lots of cases that this feature will not handle, such as alternate day names. But being able to accept what to_char() will output seems useful enough.) In passing, fix some shaky English and violations of message style guidelines in jsonpath errors for the .datetime() method, which depends on this code. Juan José Santamaría Flecha, reviewed and modified by me, with other commentary from Alvaro Herrera, Tomas Vondra, Arthur Zakirov, Peter Eisentraut, Mark Dilger. Discussion: https://postgr.es/m/CAC+AXB3u1jTngJcoC1nAHBf=M3v-jrEfo86UFtCqCjzbWS9QhA@mail.gmail.com
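A sketch of the round trip this enables, assuming an es_ES locale is installed on the system (the locale name and month spelling are illustrative):

```sql
SET lc_time = 'es_ES.utf8';  -- assumption: this locale exists

SELECT to_char(date '2020-03-01', 'DD TMMonth YYYY');  -- e.g. "01 Marzo 2020"

-- Previously the TM prefix was ignored on input; now this parses back:
SELECT to_date('01 Marzo 2020', 'DD TMMonth YYYY');
```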
This way we can make use of it in other components as well, and it fits better with the rest of the build system. Discussion: https://www.postgresql.org/message-id/flat/72fff73f-dc9c-4ef4-83e8-d2e60c98df48%402ndquadrant.com
Our usual practice for "poor man's enum" catalog columns is to define macros for the possible values and use those, not literal constants, in C code. But for some reason lost in the mists of time, this was never done for typalign/attalign or typstorage/attstorage. It's never too late to make it better though, so let's do that. The reason I got interested in this right now is the need to duplicate some uses of the TYPSTORAGE constants in an upcoming ALTER TYPE patch. But in general, this sort of change aids greppability and readability, so it's a good idea even without any specific motivation. I may have missed a few places that could be converted, and it's even more likely that pending patches will re-introduce some hard-coded references. But that's not fatal --- there's no expectation that we'd actually change any of these values. We can clean up stragglers over time. Discussion: https://postgr.es/m/[email protected]
Optionally push a step to check for a NULL pointer to the pergroup state. This will be important for disk-based hash aggregation in combination with grouping sets. When memory limits are reached, a given tuple may find its per-group state for some grouping sets but not others. For the former, it advances the per-group state as normal; for the latter, it skips evaluation and the calling code will have to spill the tuple and reprocess it in a later batch. Add the NULL check as a separate expression step because in some common cases it's not needed. Discussion: https://postgr.es/m/20200221202212.ssb2qpmdgrnx52sj%40alap3.anarazel.de
…CURRENTLY When canceling a REINDEX CONCURRENTLY operation after swapping is done, a drop of the parent table would leave behind old indexes. This is a consequence of 68ac9cf, which fixed the case of pg_depend bloat when repeating REINDEX CONCURRENTLY on the same relation. In order to take care of the problem without breaking the previous fix, this uses a different strategy, possible even with the existing set of routines to handle dependency changes. The dependencies of/on the new index are additionally switched to the old one, allowing an old invalid index left behind by a cancellation or failure to use the dependency links of the concurrently-created index. This ensures that dropping any objects the old invalid index depends on also drops the old index automatically. Reported-by: Julien Rouhaud Author: Michael Paquier Reviewed-by: Julien Rouhaud Discussion: https://postgr.es/m/20200227080735.l32fqcauy73lon7o@nol Backpatch-through: 12
This commit fixes the following two issues around the .pgpass file. (1) If the length of a line in .pgpass was larger than 319B, libpq silently treated each 319B in the line as a separate setting line. (2) The documentation explains that a line beginning with # is treated as a comment in .pgpass, but there was no code doing such special handling. Whether a line began with # or not, libpq just checked whether the first token in the line matched the host. For (1), this commit makes libpq warn if the length of a line is larger than 319B and throw away the remaining part beginning from the 320B position. For (2), this commit changes libpq so that it treats any line beginning with # as a comment. Author: Fujii Masao Reviewed-by: Hamid Akhtar Discussion: https://postgr.es/m/[email protected]
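For reference, each .pgpass line has the form host:port:database:user:password; with this change a file like the following behaves as documented (contents illustrative):

```
# With this fix, a line starting with # is treated as a comment rather
# than matched against the host name.
localhost:5432:mydb:alice:secret
```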
This does not matter much when compiling Postgres proper as many warnings exist when enabling this compilation flag, but it can be annoying for external modules willing to use both. Author: David Steele Discussion: https://postgr.es/m/[email protected]
We used to carry the I/O function OID in RangeIOData, but it's not used for anything. Since the struct is not exposed to the world anyway, we can simplify it a bit. Also, rename the FmgrInfo member to match the accompanying 'typioparam' and put them in a more sensible order. Reviewed by Tom Lane and Paul Jungwirth. Discussion: https://postgr.es/m/[email protected]
Only internal page splits need to call _bt_pgaddtup() instead of PageAddItem(), and only for data items, one of which will end up at the first offset (or first offset after the high key offset) on the new right page. This data item alone will need to be truncated in _bt_pgaddtup(). Since there is no reason why retail inserts ever need to truncate the incoming item, use a raw PageAddItem() call there instead. Even _bt_split() uses raw PageAddItem() calls for left page and right page high keys. Clearly the _bt_pgaddtup() shim function wasn't really encapsulating anything. _bt_pgaddtup() should now be thought of as a _bt_split() helper function. Note that the assertions from commit d1e241c verify that retail inserts never insert an item at an internal page's negative infinity offset. This invariant could only ever be violated as a result of a basic logic error in nbtinsert.c.
…cle. Assert that we don't acquire any other heavyweight lock while holding the page lock except for relation extension. However, these locks are never taken in reverse order which implies that page locks will never participate in the deadlock cycle. Similar to relation extension, page locks are also held for a short duration, so imposing such a restriction won't hurt. Author: Dilip Kumar, with few changes by Amit Kapila Reviewed-by: Amit Kapila, Kuntal Ghosh and Sawada Masahiko Discussion: https://postgr.es/m/CAD21AoCmT3cFQUN4aVvzy5chw7DuzXrJCbrjTU05B+Ss=Gn1LA@mail.gmail.com
This commit renames the RecoveryWalAll and RecoveryWalStream wait events to RecoveryWalStream and RecoveryRetrieveRetryInterval, respectively, to make the names more consistent with the situations in which they are reported. For example, RecoveryWalAll was previously reported while recovery was waiting for WAL from a stream, which was confusing because the name suggested something quite different. The names of the macro variables for those wait events are renamed accordingly. This commit also changes the category of RecoveryRetrieveRetryInterval from Activity to Timeout, because the wait event is reported while waiting based on wal_retrieve_retry_interval. Author: Fujii Masao Reviewed-by: Kyotaro Horiguchi, Atsushi Torikoshi Discussion: https://postgr.es/m/[email protected]
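After the rename, the events show up under their new names, e.g.:

```sql
-- Observing the renamed wait events (names as introduced by this commit):
SELECT pid, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event IN ('RecoveryWalStream', 'RecoveryRetrieveRetryInterval');
```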
This by itself doesn't change any functionality but prepares the way for having relations other than base tables in publications. Make arrangements for COPY handling the initial table sync. For non-tables we have to use COPY (SELECT ...) instead of directly copying from the table, but then we have to take care to omit generated columns from the column list. Also, remove a hardcoded reference to relkind = 'r' and rely on the publisher to send only what it can actually publish, which will be correct even in future cross-version scenarios. Reviewed-by: Amit Langote <[email protected]> Discussion: https://www.postgresql.org/message-id/flat/CA+HiwqH=Y85vRK3mOdjEkqFK+E=ST=eQiHdpj43L=_eJMOOznQ@mail.gmail.com
…by default. This commit changes pg_basebackup so that it specifies the PROGRESS option in the BASE_BACKUP replication command whether or not --progress is specified. This causes the server to estimate the total backup size and report it in pg_stat_progress_basebackup.backup_total by default. This is a reasonable default because the time required for the estimation is not large in most cases. This commit also adds a new option, --no-estimate-size, to pg_basebackup. This option prevents the server from performing the estimation, and so is useful for avoiding the estimation time when it would be too long. Author: Fujii Masao Reviewed-by: Magnus Hagander, Amit Langote Discussion: https://postgr.es/m/CABUevEyDPPSjP7KRvfTXPdqOdY5aWNkqsB5aAXs3bco5ZwtGHg@mail.gmail.com
This patch adds the pseudo-types anycompatible, anycompatiblearray, anycompatiblenonarray, and anycompatiblerange. They work much like anyelement, anyarray, anynonarray, and anyrange respectively, except that the actual input values need not match precisely in type. Instead, if we can find a common supertype (using the same rules as for UNION/CASE type resolution), then the parser automatically promotes the input values to that type. For example, "myfunc(anycompatible, anycompatible)" can match a call with one integer and one bigint argument, with the integer automatically promoted to bigint. With anyelement in the definition, the user would have had to cast the integer explicitly. The new types also provide a second, independent set of type variables for function matching; thus with "myfunc(anyelement, anyelement, anycompatible) returns anycompatible" the first two arguments are constrained to be the same type, but the third can be some other type, and the result has the type of the third argument. The need for more than one set of type variables was foreseen back when we first invented the polymorphic types, but we never did anything about it. Pavel Stehule, revised a bit by me Discussion: https://postgr.es/m/CAFj8pRDna7VqNi8gR+Tt2Ktmz0cq5G93guc3Sbn_NVPLdXAkqA@mail.gmail.com
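A minimal sketch of the new polymorphic family (the function name is illustrative):

```sql
-- Both arguments are promoted to their common supertype, which also
-- becomes the result type.
CREATE FUNCTION larger_of(anycompatible, anycompatible)
RETURNS anycompatible
LANGUAGE sql
AS 'SELECT greatest($1, $2)';

SELECT larger_of(1, 2::bigint);      -- integer promoted to bigint
SELECT larger_of(1.5, 2::integer);   -- integer promoted to numeric
```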
This patch fixes the error message in get_major_server_version() to be "could not parse version file", and uses the full file path name, rather than just the data directory path. Also, commit 4109bb5 added the cause of the failure to the "could not open" error message, and improved quoting. This patch backpatches the "could not open" cause to PG 12, where it was first widely used, and backpatches the quoting fix in that patch to all supported releases. Reported-by: Tom Lane Discussion: https://postgr.es/m/[email protected] Author: Dagfinn Ilmari Mannsåker Backpatch-through: 9.5
This reverts commit e00912e.
Remove comments that are a throwback to a time when nbtree cared about write-ordering dependencies. The comments are similar to those removed by commit 9ee7414, among others.
This is required as it is no safer for two related processes to extend the same relation at a time than for unrelated processes to do the same. We don't acquire a heavyweight lock on any other object after the relation extension lock, which means such a lock can never participate in a deadlock cycle. So, avoid checking wait edges from this lock. This provides infrastructure to allow parallel operations like INSERT, COPY, etc., which were not possible earlier; now parallel group members won't conflict on the relation extension lock. Author: Dilip Kumar, Amit Kapila Reviewed-by: Amit Kapila, Kuntal Ghosh and Sawada Masahiko Discussion: https://postgr.es/m/CAD21AoCmT3cFQUN4aVvzy5chw7DuzXrJCbrjTU05B+Ss=Gn1LA@mail.gmail.com
wjzeng pushed a commit that referenced this pull request on Jul 9, 2021
Due to how pg_size_pretty(bigint) was implemented, it was possible that, when given a negative number of bytes, the returned value would not match the equivalent positive return value for the equivalent positive number of bytes. This was due to two separate issues:

1. The function used bit shifting to convert the number of bytes into larger units. The rounding performed by bit shifting is not the same as dividing. For example -3 >> 1 = -2, but -3 / 2 = -1. These two operations are only equivalent with positive numbers.

2. The half_rounded() macro rounded towards positive infinity. This meant that negative numbers rounded towards zero and positive numbers rounded away from zero.

Here we fix #1 by dividing the values instead of bit shifting. We fix #2 by adjusting the half_rounded macro to always round away from zero.

Additionally, adjust the pg_size_pretty(numeric) function to be more explicit that it's using division rather than bit shifting. A casual observer might have believed bit shifting was used due to a static function being named numeric_shift_right. However, that function was calculating the divisor from the number of bits and performed division. Here we make that more clear. This change is just cosmetic and does not affect the return value of the numeric version of the function.

Here we also add a set of regression tests for both versions of pg_size_pretty() which test the values directly before and after the function switches to the next unit.

This bug was introduced in 8a1fab3. Prior to that, negative values were always displayed in bytes.

Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
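The restored symmetry can be checked around a unit boundary; a sketch, assuming the bigint bytes-to-kB threshold sits at 10240:

```sql
-- After the fix, negated inputs produce the negated text of the
-- positive result (expected: "10239 bytes" / "10 kB" and their negatives).
SELECT x,
       pg_size_pretty(x)  AS positive,
       pg_size_pretty(-x) AS negative
FROM (VALUES (10239::bigint), (10240::bigint)) AS v(x);
```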
wjzeng pushed a commit that referenced this pull request on Sep 3, 2021
Currently the pc files use hard coded paths for "includedir" and "libdir." Example:

    Cflags: -I/usr/include
    Libs: -L/usr/lib -lpq

This is not very fortunate when cross compiling inside a buildroot, where the includes and libs are inside a staging directory, because this introduces host paths into the build:

    checking for pkg-config... /builder/shared-workdir/build/sdk/staging_dir/host/bin/pkg-config
    checking for PostgreSQL libraries via pkg_config... -L/usr/lib <----

This commit addresses this by doing the following two things:

1. Instead of hard coding the paths in "Cflags" and "Libs", "${includedir}" and "${libdir}" are used. Note: these variables can be overridden on the pkg-config command line ("--define-variable=libdir=/some/path").

2. Add the variables "prefix" and "exec_prefix". If "includedir" and/or "libdir" are using these then construct them accordingly. This is done because buildroots (for instance OpenWrt) tend to rename the real pkg-config and call it indirectly from a script that sets "prefix", "exec_prefix" and "bindir", like so:

    pkg-config.real --define-variable=prefix=${STAGING_PREFIX} \
        --define-variable=exec_prefix=${STAGING_PREFIX} \
        --define-variable=bindir=${STAGING_PREFIX}/bin $@

Example #1: user calls ./configure with "--libdir=/some/lib" and "--includedir=/some/include":

    prefix=/usr/local/pgsql
    exec_prefix=${prefix}
    libdir=/some/lib
    includedir=/some/include

    Name: libpq
    Description: PostgreSQL libpq library
    Url: http://www.postgresql.org/
    Version: 12.1
    Requires:
    Requires.private:
    Cflags: -I${includedir}
    Libs: -L${libdir} -lpq
    Libs.private: -lcrypt -lm

Example #2: user calls ./configure with no arguments:

    prefix=/usr/local/pgsql
    exec_prefix=${prefix}
    libdir=${exec_prefix}/lib
    includedir=${prefix}/include

    Name: libpq
    Description: PostgreSQL libpq library
    Url: http://www.postgresql.org/
    Version: 12.1
    Requires:
    Requires.private:
    Cflags: -I${includedir}
    Libs: -L${libdir} -lpq
    Libs.private: -lcrypt -lm

Like this the paths can be forced into the staging directory when using a buildroot setup:

    checking for pkg-config... /home/sk/tmp/openwrt/staging_dir/host/bin/pkg-config
    checking for PostgreSQL libraries via pkg_config... -L/home/sk/tmp/openwrt/staging_dir/target-mips_24kc_musl/usr/lib

Author: Sebastian Kemper <[email protected]>
Co-authored-by: Peter Eisentraut <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/20200305213827.GA25135%40darth.lan
wjzeng pushed a commit that referenced this pull request on Jan 9, 2023
Remove four probes for members of sockaddr_storage. Keep only the probe for sockaddr's sa_len, which is enough for our two remaining places that know about _len fields:

1. ifaddr.c needs to know if sockaddr has sa_len to understand the result of ioctl(SIOCGIFCONF). Only AIX is still using the relevant code today, but it seems like a good idea to keep it compilable on Linux.

2. ip.c was testing for presence of ss_len to decide whether to fill in sun_len in our getaddrinfo_unix() function. It's just as good to test for sa_len. If you have one, you have them all.

(The code in #2 isn't actually needed at all on several OSes I checked since modern versions ignore sa_len on input to system calls. Proving that's the case for all relevant OSes is left for another day, but wouldn't get rid of that last probe anyway if we still want it for #1.)

Discussion: https://postgr.es/m/CA%2BhUKGJJjF2AqdU_Aug5n2MAc1gr%3DGykNjVBZq%2Bd6Jrcp3Dyvg%40mail.gmail.com
wjzeng pushed a commit that referenced this pull request on Sep 10, 2023
Commit e7cb7ee, which introduced the infrastructure for FDWs and custom scan providers to replace joins with scans, failed to add support for handling pseudoconstant quals assigned to replaced joins in createplan.c, leading to an incorrect plan without a gating Result node when postgres_fdw replaced a join with such a qual. To fix, we could add that support by 1) modifying the ForeignPath and CustomPath structs to store the list of RestrictInfo nodes to apply to the join, as in JoinPaths, if they represent foreign and custom scans replacing a join with a scan, and by 2) modifying create_scan_plan() in createplan.c to use that list in that case, instead of the baserestrictinfo list, to get pseudoconstant quals assigned to the join; but #1 would cause an ABI break. So fix by modifying the infrastructure to just disallow replacing joins with such quals. Back-patch to all supported branches. Reported by Nishant Sharma. Patch by me, reviewed by Nishant Sharma and Richard Guo. Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
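For context, a pseudoconstant qual references no Vars and contains no volatile functions, so it can be evaluated once per execution by a gating Result node. A sketch of the shape of query affected (ft1 and ft2 stand in for postgres_fdw foreign tables):

```sql
-- now() < '2000-01-01' is pseudoconstant (no Vars, not volatile);
-- a correct plan gates the whole join behind a Result node that can
-- skip the scan entirely when the qual is false.
SELECT *
FROM ft1
JOIN ft2 ON ft1.id = ft2.id
WHERE now() < '2000-01-01';
```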
wjzeng pushed a commit that referenced this pull request on Nov 24, 2023
Default privileges are represented as NULL::aclitem[] in catalog ACL columns, while revoking all privileges leaves an empty aclitem[]. These two cases used to produce identical output in psql meta-commands like \dp. Using something like "\pset null '(default)'" as a workaround for spotting the difference did not work, because null values were always displayed as empty strings by describe.c's meta-commands.

This patch improves that with two changes:

1. Print "(none)" for empty privileges so that the user is able to distinguish them from default privileges, even without special workarounds.

2. Remove the special handling of null values in describe.c, so that "\pset null" is honored like everywhere else. (This affects all output from these commands, not only ACLs.)

The privileges shown by \dconfig+ and \ddp as well as the column privileges shown by \dp are not affected by change #1, because the respective aclitem[] is reset to NULL or deleted from the catalog instead of leaving an empty array.

Erik Wienhold and Laurenz Albe
Discussion: https://postgr.es/m/[email protected]
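A sketch of the visible difference in a psql session (table names illustrative):

```sql
CREATE TABLE has_default_privs (a int);
CREATE TABLE has_empty_privs  (a int);
REVOKE ALL ON has_empty_privs FROM PUBLIC, CURRENT_USER;

\pset null '(default)'
\dp has_default_privs   -- ACL column is NULL, now shown as "(default)"
\dp has_empty_privs     -- ACL column is empty, now shown as "(none)"
```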
wjzeng pushed a commit that referenced this pull request on Jul 2, 2024
1. TruncateMultiXact() performs the SLRU truncations in a critical section. Deleting the SLRU segments calls ForwardSyncRequest(), which will try to compact the request queue if it's full (CompactCheckpointerRequestQueue()). That in turn allocates memory, which is not allowed in a critical section. Backtrace:

    TRAP: failed Assert("CritSectionCount == 0 || (context)->allowInCritSection"), File: "../src/backend/utils/mmgr/mcxt.c", Line: 1353, PID: 920981
    postgres: autovacuum worker template0(ExceptionalCondition+0x6e)[0x560a501e866e]
    postgres: autovacuum worker template0(+0x5dce3d)[0x560a50217e3d]
    postgres: autovacuum worker template0(ForwardSyncRequest+0x8e)[0x560a4ffec95e]
    postgres: autovacuum worker template0(RegisterSyncRequest+0x2b)[0x560a50091eeb]
    postgres: autovacuum worker template0(+0x187b0a)[0x560a4fdc2b0a]
    postgres: autovacuum worker template0(SlruDeleteSegment+0x101)[0x560a4fdc2ab1]
    postgres: autovacuum worker template0(TruncateMultiXact+0x2fb)[0x560a4fdbde1b]
    postgres: autovacuum worker template0(vac_update_datfrozenxid+0x4b3)[0x560a4febd2f3]
    postgres: autovacuum worker template0(+0x3adf66)[0x560a4ffe8f66]
    postgres: autovacuum worker template0(AutoVacWorkerMain+0x3ed)[0x560a4ffe7c2d]
    postgres: autovacuum worker template0(+0x3b1ead)[0x560a4ffecead]
    postgres: autovacuum worker template0(+0x3b620e)[0x560a4fff120e]
    postgres: autovacuum worker template0(+0x3b3fbb)[0x560a4ffeefbb]
    postgres: autovacuum worker template0(+0x2f724e)[0x560a4ff3224e]
    /lib/x86_64-linux-gnu/libc.so.6(+0x27c8a)[0x7f62cc642c8a]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f62cc642d45]
    postgres: autovacuum worker template0(_start+0x21)[0x560a4fd16f31]

To fix, bail out in CompactCheckpointerRequestQueue() without doing anything, if it's called in a critical section. That covers the above call path, as well as any other similar cases where RegisterSyncRequest might be called in a critical section.

2. After fixing that, another problem became apparent: the autovacuum process doing that truncation can deadlock with the checkpointer process. TruncateMultiXact() sets "MyProc->delayChkptFlags |= DELAY_CHKPT_START". If the sync request queue is full and cannot be compacted, the process will repeatedly sleep and retry, until there is room in the queue. However, if the checkpointer is trying to start a checkpoint at the same time, and is waiting for the DELAY_CHKPT_START processes to finish, the queue will never shrink.

More concretely, the autovacuum process is stuck here:

    #0  0x00007fc934926dc3 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
    #1  0x000056220b24348b in WaitEventSetWaitBlock (set=0x56220c2e4b50, occurred_events=0x7ffe7856d040, nevents=1, cur_timeout=<optimized out>) at ../src/backend/storage/ipc/latch.c:1570
    #2  WaitEventSetWait (set=0x56220c2e4b50, timeout=timeout@entry=10, occurred_events=<optimized out>, occurred_events@entry=0x7ffe7856d040, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:1516
    #3  0x000056220b243224 in WaitLatch (latch=<optimized out>, latch@entry=0x0, wakeEvents=wakeEvents@entry=40, timeout=timeout@entry=10, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:538
    #4  0x000056220b26cf46 in RegisterSyncRequest (ftag=ftag@entry=0x7ffe7856d0a0, type=type@entry=SYNC_FORGET_REQUEST, retryOnError=true) at ../src/backend/storage/sync/sync.c:614
    #5  0x000056220af9db0a in SlruInternalDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1495
    #6  0x000056220af9dab1 in SlruDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1566
    #7  0x000056220af98e1b in PerformMembersTruncation (oldestOffset=<optimized out>, newOldestOffset=<optimized out>) at ../src/backend/access/transam/multixact.c:3006
    #8  TruncateMultiXact (newOldestMulti=newOldestMulti@entry=3221225472, newOldestMultiDB=newOldestMultiDB@entry=4) at ../src/backend/access/transam/multixact.c:3201
    #9  0x000056220b098303 in vac_truncate_clog (frozenXID=749, minMulti=<optimized out>, lastSaneFrozenXid=749, lastSaneMinMulti=3221225472) at ../src/backend/commands/vacuum.c:1917
    #10 vac_update_datfrozenxid () at ../src/backend/commands/vacuum.c:1760
    #11 0x000056220b1c3f76 in do_autovacuum () at ../src/backend/postmaster/autovacuum.c:2550
    #12 0x000056220b1c2c3d in AutoVacWorkerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/autovacuum.c:1569

and the checkpointer is stuck here:

    #0  0x00007fc9348ebf93 in clock_nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
    #1  0x00007fc9348fe353 in nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
    #2  0x000056220b40ecb4 in pg_usleep (microsec=microsec@entry=10000) at ../src/port/pgsleep.c:50
    #3  0x000056220afb43c3 in CreateCheckPoint (flags=flags@entry=108) at ../src/backend/access/transam/xlog.c:7098
    #4  0x000056220b1c6e86 in CheckpointerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/checkpointer.c:464

To fix, add AbsorbSyncRequests() to the loops where the checkpointer waits for DELAY_CHKPT_START or DELAY_CHKPT_COMPLETE operations to finish.

Backpatch to v14. Before that, SLRU deletion didn't call RegisterSyncRequest, which avoided this failure. I'm not sure if there are other similar scenarios on older versions, but we haven't had any such reports.

Discussion: https://www.postgresql.org/message-id/[email protected]
wjzeng pushed a commit that referenced this pull request on Aug 15, 2025
There've been a few complaints that it can be overly difficult to figure out why the planner picked a Memoize plan. To help address that, here we adjust the EXPLAIN output to display the following additional details:

1) The estimated number of cache entries that can be stored at once
2) The estimated number of unique lookup keys that we expect to see
3) The number of lookups we expect
4) The estimated hit ratio

Technically #4 can be calculated using #1, #2 and #3, but it's not a particularly obvious calculation, so we opt to display it explicitly. The original patch by Lukas Fittl only displayed the hit ratio, but there was a fear that might lead to more questions about how that was calculated. The idea with displaying all 4 is to be transparent, which may allow queries to be tuned more easily. For example, if #2 isn't correct then maybe extended statistics or a manual n_distinct estimate can be used to help fix poor plan choices.

Author: Ilia Evdokimov <[email protected]>
Author: Lukas Fittl <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Andrei Lepikhov <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Discussion: https://postgr.es/m/CAP53Pky29GWAVVk3oBgKBDqhND0BRBN6yTPeguV_qSivFL5N_g%40mail.gmail.com
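A sketch of where the new output appears (table and column names are illustrative, and the planner only chooses Memoize when a parameterized nested loop looks cheapest):

```sql
-- If the planner picks a nested loop with a Memoize node over the inner
-- index scan, that node's EXPLAIN output now also reports the estimated
-- cache capacity, distinct keys, expected lookups, and hit ratio.
EXPLAIN
SELECT *
FROM orders o
JOIN customers c ON c.id = o.customer_id;
```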