Extended Query: Support Batch Execution #2257
Isn't it already possible in this case? #1190 (comment)
While the two approaches look similar, they have very different performance characteristics. Sending a large number of SQL statements with different parameters performs much worse than sending a single prepared statement and binding many parameter sets to it.
yeah a proper "batched query" would be nice. Probably a separate object you pass to
Then the batch query execution could throw if this is false for some validation:
something like that. Then it would be explicit.
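A minimal sketch of what that up-front validation might look like, assuming a hypothetical `BatchQuery` object that holds one SQL text plus many value arrays (none of these names are part of pg's actual API):

```javascript
// Hypothetical sketch: a BatchQuery validates that every value array
// matches the statement's parameter count before anything is sent.
class BatchQuery {
  constructor({ text, values }) {
    this.text = text
    this.values = values // array of value arrays, one per execution
  }

  // Count distinct $1..$n placeholders in the SQL text.
  paramCount() {
    const matches = this.text.match(/\$\d+/g) || []
    return new Set(matches).size
  }

  // Throw before execution if any value array has the wrong arity.
  validate() {
    const expected = this.paramCount()
    for (const row of this.values) {
      if (row.length !== expected) {
        throw new Error(`expected ${expected} parameters, got ${row.length}`)
      }
    }
    return true
  }
}
```

Failing fast like this, before any protocol messages are written, is what would make the error explicit rather than surfacing mid-batch from the server.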
@brianc is this supported?
from a protocol perspective, yes. But I haven't actually implemented the code yet. |
Hi @brianc, I am interested in contributing, since I believe this would be helpful for my use case in production.
That'd be cool! I'd suggest making this a separate module like Basically anything passed to |
https://github.com/brianc/node-postgres/blob/master/packages/pg-cursor/index.js#L42 This is a good point of reference. |
It would be great if this feature was implemented with support for multiple queries, rather than multiple value arrays only.

```js
const batch = new BatchQuery({
  name: 'optional',
  queries: [
    ['INSERT INTO foo (bar) VALUES ($1)', ['first']],
    ['DELETE FROM foo WHERE id = $1', [1]],
  ],
})
```
@aleclarson That would be pipelining, not multiple bind. |
for pipelining, see the experiment in #2706 |
Hi!
The Extended Query protocol enables drivers to submit multiple BIND messages before SYNC. One of the big benefits of using Extended Queries is that you can efficiently send a batch of executions without waiting for round trips for each execution. Pairing that with prepared statements and some simplifications: you send a single PARSE, a bunch of BIND/EXECUTE and a SYNC to find out how things went.
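The single-round-trip flow described above can be sketched as a plain function that lists the protocol messages in order (the message names come from the PostgreSQL frontend/backend protocol; the function itself is only an illustration, not driver code):

```javascript
// Build the message sequence for one prepared statement executed with
// many parameter sets: one Parse, a Bind/Execute pair per parameter
// set, and a single Sync at the end.
function batchMessages(paramSets) {
  const messages = ['Parse']
  for (const _ of paramSets) {
    messages.push('Bind', 'Execute')
  }
  messages.push('Sync')
  return messages
}

// batchMessages([[1], [2]]) → ['Parse', 'Bind', 'Execute', 'Bind', 'Execute', 'Sync']
```

Because Sync is sent only once, the server processes every Bind/Execute in one pipeline and the client waits for a single round trip instead of one per execution.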
In other words, you'd be able to support something like the following without needing 4 entire round trips. (I'm not recommending this API since it would be a terrible breaking change.)
For more information, check out how the JDBC Postgres driver handles a batched execution. There are a few layers to dig through, but this appears to be the core of the code that sends a batch of messages and subsequently sends a single SYNC. NOTE: their driver imposes a limit of 128 records per batch as (apparently) further batching does not improve performance.
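If an implementation adopted a cap like JDBC's, the parameter sets could simply be chunked before sending; a sketch (the 128 limit mirrors the JDBC behavior mentioned above, and `chunkBatches` is an illustrative name, not an existing API):

```javascript
// Split a list of parameter sets into chunks of at most `limit` rows,
// so each chunk can be sent as its own Parse/Bind.../Sync batch.
function chunkBatches(paramSets, limit = 128) {
  const chunks = []
  for (let i = 0; i < paramSets.length; i += limit) {
    chunks.push(paramSets.slice(i, i + limit))
  }
  return chunks
}
```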