Commit 1561be5

JelteF and fdpxcc authored

Fix typos (#968)

Same as #962 but with merge conflicts resolved
Co-authored-by: Cheng Chen <[email protected]>

1 parent db9e0f9 commit 1561be5

File tree

17 files changed: +26 -26 lines changed

CHANGELOG.md
Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@
 - Update to DuckDB 1.3.2. ([#754], [#858])
 - Change the way MotherDuck is configured. It's not done anymore through the Postgres configuration file. Instead, you should now enable MotherDuck using `CALL duckdb.enable_motherduck(...)` or equivalent `CREATE SERVER` and `CREATE USER MAPPING` commands. ([#668])
 - Change the way secrets are added to DuckDB. You'll need to recreate your secrets using the new method `duckdb.create_simple_secret` or `duckdb.create_azure_secret` functions. Internally secrets are now stored `SERVER` and `USER MAPPING` for the `duckdb` foreign data wrapper. ([#697])
-- Disallow DuckDB execution inside functions by default. This feature can cause crashes in rare cases and is intended to be re-enabled in a future release. For now you can use `duckdb.unsafe_allow_execution_inside_function` to allow functions anyway. ([#764], [#884])
+- Disallow DuckDB execution inside functions by default. This feature can cause crashes in rare cases and is intended to be re-enabled in a future release. For now you can use `duckdb.unsafe_allow_execution_inside_functions` to allow functions anyway. ([#764], [#884])
 - Don't convert Postgres NUMERICs with a precision that's unsupported in DuckDB to double by default. Instead it will throw an error. If you want the lossy conversion to DOUBLE to happen, you can enable `duckdb.convert_unsupported_numeric_to_double`. ([#795])
 - Remove custom HTTP caching logic. ([#644])
 - When creating a table in a `ddb$` schema that table now uses the `duckdb` table access method by default. ([#650])

docs/settings.md
Lines changed: 1 addition & 1 deletion

@@ -57,7 +57,7 @@ Determines whether community extensions can be installed.
 - **Default**: `false`
 - **Access**: Superuser-only

-### `duckdb.unsafe_allow_execution_inside_function`
+### `duckdb.unsafe_allow_execution_inside_functions`

 Allows DuckDB execution inside PostgreSQL functions. This feature can cause crashes in rare cases and is disabled by default. Use with caution.
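Since the GUC renamed in this hunk is an ordinary Postgres setting, enabling it looks like any other `SET` command. The following is an illustrative sketch, not part of the commit; the database name `mydb` is made up:

```sql
-- Illustrative only: enable at your own risk, per the warning above.
SET duckdb.unsafe_allow_execution_inside_functions = true;

-- Or persist it for a database (the setting is superuser-only):
ALTER DATABASE mydb SET duckdb.unsafe_allow_execution_inside_functions = true;
```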

docs/types.md
Lines changed: 1 addition & 1 deletion

@@ -121,7 +121,7 @@ SELECT * FROM mycte WHERE company = 'DuckDB Labs';
 -- ERROR: 42703: column "company" does not exist
 -- LINE 5: SELECT * FROM mycte WHERE company = 'DuckDB Labs';
 -- ^
--- HINT: If you use DuckDB functions like read_parquet, you need to use the r['colname'] syntax to use columns. If you're already doing that, maybe you forgot to to give the function the r alias.
+-- HINT: If you use DuckDB functions like read_parquet, you need to use the r['colname'] syntax to use columns. If you're already doing that, maybe you forgot to give the function the r alias.
 ```

 This is easy to work around by using the `r['colname']` syntax like so:
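The docs' own example is cut off at this hunk boundary; a sketch of what such a workaround query looks like follows (the file name `companies.parquet` is illustrative, not from the commit):

```sql
-- Pull the column out through the r['colname'] syntax inside the CTE,
-- then the outer query can refer to it by its plain alias.
WITH mycte AS (
    SELECT r['company'] AS company
    FROM read_parquet('companies.parquet') r
)
SELECT * FROM mycte WHERE company = 'DuckDB Labs';
```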

include/pgduckdb/utility/cpp_wrapper.hpp
Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ __CPPFunctionGuard__(const char *func_name, const char *file_name, int line, Fun
 // }

 // In this case the `PG_CATCH` block that will handle the error thrown below
-// would try to reset the stack to the begining of `my_func` and crash
+// would try to reset the stack to the beginning of `my_func` and crash
 //
 // So instead this should also be wrapped in a `InvokeCPPFunc` like:
 //

include/pgduckdb/vendor/pg_numeric_c.hpp
Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@
 *-------------------------------------------------------------------------
 */

-// Removed all un-neccessary includes
+// Removed all un-necessary includes
 #include "postgres.h"
 #include "utils/numeric.h"
 #include "lib/hyperloglog.h"

scripts/tpch/README.md
Lines changed: 2 additions & 2 deletions

@@ -7,7 +7,7 @@ easiest way to do this is using docker:
 # Simply run this to enable motherduck:
 docker run --rm -e POSTGRES_HOST_AUTH_METHOD=trust --network=host -d --name pgduck -e MOTHERDUCK_TOKEN \
     pgduckdb/pgduckdb:18-main
-# For real benchmarks it's recommended to configure Postgres its its settings,
+# For real benchmarks it's recommended to configure Postgres its settings,
 # as well as docker its --shm-size to be a good match for your machine. For the
 # best results this obviously requires tuning.
 # A decent starting point for an AWS c6a.8xlarge (32 vCPU, 64GB RAM) instance

@@ -35,7 +35,7 @@ curl -LsSf https://astral.sh/uv/install.sh | sh
 After that you can use `./run.py` (see `run.py --help` for details) to run a
 TCPH-like benchmark. Check `./run.py --help` for details on the arguments. A
 simple example that compares the DuckDB engine and the Postgres engine on an
-extremely tiny dataset dataset (for real performance comparisons real use scale
+extremely tiny dataset (for real performance comparisons real use scale
 factors of 1 or higher):

 ```bash

sql/pg_duckdb--1.0.0.sql
Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 LOAD 'pg_duckdb';

--- We create a duckdb schema to store most of our things. We explicitely
+-- We create a duckdb schema to store most of our things. We explicitly
 -- don't use CREATE IF EXISTS or the schema key in the control file, so we know
 -- for sure that the extension will own the schema and thus non superusers
 -- cannot put random things in it, so we can assume it's safe. A few functions

src/pg/pgduckdb_subscript.cpp
Lines changed: 4 additions & 4 deletions

@@ -52,13 +52,13 @@ CoerceSubscriptToText(struct ParseState *pstate, A_Indices *subscript, const cha
 }

 /*
- * In Postgres all index operations in a row ar all slices or all plain
+ * In Postgres all index operations in a row are all slices or all plain
  * index operations. If you mix them, all are converted to slices.
  * There's no difference in representation possible between
- * "col[1:2][1]" and "col[1:2][1:]". If you want this seperation you
- * need to use parenthesis to seperate: "(col[1:2])[1]"
+ * "col[1:2][1]" and "col[1:2][1:]". If you want this separation you
+ * need to use parenthesis to separate: "(col[1:2])[1]"
  * This might seem like fairly strange behaviour, but Postgres uses
- * this to be able to slice in multi-dimensional arrays and thtis
+ * this to be able to slice in multi-dimensional arrays and this
  * behaviour is documented here:
  * https://www.postgresql.org/docs/current/arrays.html#ARRAYS-ACCESSING
 *
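The slice-conversion behaviour that comment describes can be tried in plain Postgres; these queries are illustrative and not part of the commit:

```sql
-- Mixing a slice and a plain index converts everything to slices,
-- so these two subscript chains parse identically:
SELECT ('{{1,2},{3,4}}'::int[])[1:2][1];    -- parsed as [1:2][1:1], result is still an array
SELECT ('{{1,2},{3,4}}'::int[])[1:2][1:1];

-- Parentheses separate the operations, so plain indexing can be
-- applied to the slice result afterwards:
SELECT (('{{1,2},{3,4}}'::int[])[1:2])[1][1];  -- plain element lookup on the slice
```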

src/pgduckdb_background_worker.cpp
Lines changed: 1 addition & 1 deletion

@@ -984,7 +984,7 @@ DropRelation(const char *fully_qualified_table, char relation_kind, bool drop_wi
 	return false;
 }
 /*
- * We explicitely don't call SPI_commit_that_works_in_background_worker
+ * We explicitly don't call SPI_commit_that_works_in_background_worker
  * here, because that makes transactional considerations easier. And when
  * deleting tables, it doesn't matter how long we keep locks on them,
  * because they are already deleted upstream so there can be no queries on

src/pgduckdb_ddl.cpp
Lines changed: 3 additions & 3 deletions

@@ -1117,8 +1117,8 @@ DuckdbShouldCallDDLHooks(PlannedStmt *pstmt) {
 	 */
 	if (IsA(parsetree, TransactionStmt)) {
 		/*
-		 * We could explicitely only check for BEGIN, but any others won't be
-		 * the first query in a session anyway (so initialzing caching doesn't
+		 * We could explicitly only check for BEGIN, but any others won't be
+		 * the first query in a session anyway (so initializing caching doesn't
 		 * matter).
 		 *
 		 * Secondly, we also don't want to do anything for transaction

@@ -1455,7 +1455,7 @@ DECLARE_PG_FUNCTION(duckdb_drop_trigger) {
 	 * 1. Each table owns two types:
 	 *    a. the composite type matching its columns
 	 *    b. the array of that composite type
-	 * 2. There can also be many implicitely connected things to a table, like sequences/constraints/etc
+	 * 2. There can also be many implicitly connected things to a table, like sequences/constraints/etc
 	 *
 	 * So here we try to count all the objects that are not connected to a
 	 * table. Sadly at this stage the objects are already deleted, so there's
