This repository was archived by the owner on May 15, 2024. It is now read-only.

Support more than 5TiB max blob size #202

Open
xinaxu opened this issue Nov 1, 2023 · 2 comments

xinaxu (Collaborator) commented Nov 1, 2023

The Zenko service has hardcoded constants chosen to stay as close to AWS S3 as possible. Those constants could be overridden to support blobs larger than 5TiB.

We also need to make sure that client-side tooling, e.g. the AWS S3 CLI or the AWS S3 SDKs, supports the larger part count and part size.
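For reference, the standard AWS S3 multipart limits are 10,000 parts per upload and 5 GiB per part (with a 5 TiB object cap), so even once the server-side cap is raised, a client has to pick a part size of at least blobSize / 10,000. A minimal sketch of that arithmetic in Go (the constant names here are ours, not Zenko's or any SDK's):

```go
package main

import "fmt"

const (
	MiB = int64(1) << 20
	GiB = int64(1) << 30
	TiB = int64(1) << 40

	// Standard AWS S3 multipart limits; a Zenko-style backend would need
	// its equivalents raised before it accepts larger blobs.
	maxPartCount  = 10_000
	maxPartSizeS3 = 5 * GiB
)

// minPartSize returns the smallest part size (rounded up to a MiB boundary)
// that lets blobSize bytes fit within maxParts parts.
func minPartSize(blobSize, maxParts int64) int64 {
	size := (blobSize + maxParts - 1) / maxParts // ceiling division
	if rem := size % MiB; rem != 0 {
		size += MiB - rem // round up to a whole MiB, which S3 clients expect
	}
	return size
}

func main() {
	for _, blob := range []int64{5 * TiB, 16 * TiB, 48 * TiB} {
		ps := minPartSize(blob, maxPartCount)
		fmt.Printf("blob %5d GiB -> min part size %5d MiB (exceeds 5 GiB part cap: %v)\n",
			blob/GiB, ps/MiB, ps > maxPartSizeS3)
	}
}
```

For a 16 TiB blob this works out to roughly 1.7 GiB per part, still comfortably under the 5 GiB part cap; only past about 48.8 TiB (10,000 × 5 GiB) would the part-size limit itself also need raising.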

@xinaxu xinaxu added enhancement New feature or request P2 labels Nov 1, 2023
@elijaharita elijaharita self-assigned this Nov 2, 2023
elijaharita (Contributor) commented

Modifying the constant should be easy enough, but how would we go about making sure this works?

I'd need access to a machine with something like 16 TiB of storage to actually test this on a devnet. Otherwise I'd just be guessing whether things work or not.

Unless I can test on an existing live instance and just stream 5 TiB of data from urandom?
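If a live instance is available, the local-storage problem mostly goes away: the test data can be generated on the fly and streamed straight into the uploader rather than staged on disk. A minimal sketch of that idea in Go, with io.Discard standing in for whatever multipart-upload client the test would actually drive:

```go
package main

import (
	"fmt"
	"io"
	"math/rand"
)

// randReader produces up to `remaining` bytes of deterministic pseudo-random
// data without ever holding more than one buffer in memory.
type randReader struct {
	rng       *rand.Rand
	remaining int64
}

func (r *randReader) Read(p []byte) (int, error) {
	if r.remaining <= 0 {
		return 0, io.EOF
	}
	if int64(len(p)) > r.remaining {
		p = p[:r.remaining]
	}
	n, _ := r.rng.Read(p) // math/rand's Read never returns an error
	r.remaining -= int64(n)
	return n, nil
}

func main() {
	const size = 64 << 20 // 64 MiB here; a real test would use >5 TiB
	src := &randReader{rng: rand.New(rand.NewSource(1)), remaining: size}

	// io.Discard stands in for the uploader (e.g. an S3 client's streaming
	// multipart upload); no 16 TiB of local scratch space is needed.
	n, err := io.Copy(io.Discard, src)
	fmt.Println(n, err)
}
```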

rvagg (Member) commented Nov 28, 2023

How about we just mock out the backend? That would also set us up for testing other kinds of complex behaviour. Either we make an /integration/mock so we can run an instance with an alternative backend, or we make a mock SingularityAPI that we can plug in to pretend there's a real Singularity there. A mock Singularity might end up being very useful, although it's a little more work; but for the purpose of testing this, the amount of the API we'd need to implement is fairly minimal.
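For the mock-SingularityAPI option, the mock only has to cover the slice of the API the handler actually calls. A minimal sketch of the shape in Go; the SingularityAPI name comes from the comment above, but the method, fields, and package layout here are all hypothetical:

```go
package mock

// SingularityAPI is an illustrative stand-in for the real client interface;
// the actual method set would mirror whatever the handler calls.
type SingularityAPI interface {
	// PushBlob is a hypothetical method representing handing a blob
	// (or one part of it) off to Singularity.
	PushBlob(size int64) (id string, err error)
}

// Mock records every call so a test can assert on what the handler did,
// without a real Singularity (or 16 TiB of storage) behind it.
type Mock struct {
	Pushed []int64 // sizes of the blobs pushed so far
	NextID string  // id to hand back on the next call
	Err    error   // error to return, if any
}

func (m *Mock) PushBlob(size int64) (string, error) {
	m.Pushed = append(m.Pushed, size)
	return m.NextID, m.Err
}
```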

Labels: enhancement (New feature or request), P2
Projects: Status: Todo
Development: No branches or pull requests
3 participants