Problem

I have described the problem here.
Also, I have produced a codesandbox to reproduce the issue here.

It seems like the way that ServerTransport batches objects and sends them to POST /objects/:streamId cannot handle a large number of objects: it issues enough POST requests to that endpoint that the server starts returning 500 errors. @iainsproat mentioned it could be a firewall or some other mechanism.
Solution

The solution would be for the code in the sandbox mentioned above to work, meaning:

- It works (the objects are sent).
- It is fast. I am under the impression that, because of the 500 errors, the retry mechanism is just making the process hang before attempting again (see the sketch below).
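To make the second point concrete: if each 500 is retried with exponential backoff, a handful of failed attempts per batch already adds up to tens of seconds of waiting, which would look exactly like a hang. A hypothetical retry loop (not ServerTransport's actual code) illustrating the math:

```ts
// Hypothetical retry loop: five failed attempts with exponential backoff
// cost 1 + 2 + 4 + 8 + 16 = 31 seconds of waiting per batch.
async function postWithRetry(
  url: string,
  body: BodyInit,
  maxAttempts = 5
): Promise<Response> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(url, { method: "POST", body });
    if (res.ok) return res;
    if (res.status >= 500) {
      // Back off before retrying: 1s, 2s, 4s, 8s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
      continue;
    }
    throw new Error(`non-retryable status ${res.status}`);
  }
  throw new Error(`gave up after ${maxAttempts} attempts`);
}
```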
Technically, I would suggest enhancing the ServerTransport to compress each batch using application/zip instead of application/json, so that it can send more objects per request, meaning fewer total requests.
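As a rough sketch of the compression idea: gzip with a Content-Encoding header achieves the same goal as a zip archive and is natively supported by Node's zlib. This is not ServerTransport's actual code; the endpoint shape, auth header, and batch format here are assumptions, and the server would need to be configured to inflate the body:

```ts
import { gzipSync } from "node:zlib";

// Sketch: gzip a JSON batch before POSTing it, so each request can carry
// more objects and the total request count drops.
async function sendBatch(
  serverUrl: string,
  streamId: string,
  token: string,
  batch: object[]
): Promise<void> {
  const body = gzipSync(JSON.stringify(batch));
  const res = await fetch(`${serverUrl}/objects/${streamId}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
      "Content-Encoding": "gzip", // server must be set up to decompress
    },
    body,
  });
  if (!res.ok) {
    throw new Error(`POST /objects/${streamId} failed: ${res.status}`);
  }
}
```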
Additional context

Please have a look at the following discussion: https://speckle.community/t/is-there-a-rate-limit-on-post-objects-projectid/12310

Hi, I am a long-time reader and first-time poster here, so I apologize in advance if this is a super novice question. I have been doing a lot of work lately with the objectsender and objectloader modules. While I have not experienced this exact issue, I do wonder about the complexity of moving large payloads of objects, or many related objects, as JSON via REST.

S3 has great support for pre-signed URLs (it seems MinIO does too) and multipart uploads; am I right to assume DigitalOcean and Azure have similar capabilities? Is there currently a method within Speckle for requesting a pre-signed URL, uploading a large batch of JSON directly to that URL so that it is handled by S3 (perhaps as JSONL or gzip'd JSON), and then triggering a worker to ingest that JSON in batches directly into the database (much the same way the IFC workers do now)? If not, and it is desired, I would be more than happy to work on that.

Again, sorry for the noob question. Thanks! @iainsproat @vwnd
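For what it is worth, the flow described above could look roughly like this with the AWS SDK v3. Everything Speckle-specific here is hypothetical (the bucket name, key layout, and the staging-upload concept do not exist in Speckle today); PutObjectCommand and getSignedUrl are the real SDK pieces:

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { gzipSync } from "node:zlib";

const s3 = new S3Client({ region: "us-east-1" });

// Server side: mint a short-lived pre-signed PUT URL for one batch.
// Bucket and key layout are made up for illustration.
async function createBatchUploadUrl(
  streamId: string,
  batchId: string
): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: "speckle-staging-uploads", // hypothetical bucket
    Key: `${streamId}/${batchId}.jsonl.gz`, // gzip'd JSONL, as suggested above
    ContentType: "application/gzip",
  });
  return getSignedUrl(s3, command, { expiresIn: 900 }); // valid for 15 min
}

// Client side: PUT the batch straight to object storage, bypassing the
// server's /objects endpoint; a worker would later ingest the JSONL into
// the database in chunks, much like the IFC workers do.
async function uploadBatch(uploadUrl: string, objects: object[]): Promise<void> {
  const jsonl = objects.map((o) => JSON.stringify(o)).join("\n");
  const res = await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": "application/gzip" },
    body: gzipSync(jsonl),
  });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);
}
```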