Subnetwork and storage command line flags #1548

Open
morph-dev opened this issue Oct 23, 2024 · 5 comments

@morph-dev (Collaborator)

Given our recent discussion in a team meeting, I think we should adjust command line flags to reflect our long term direction before we make "v0.1" next week.

I'm going to propose a couple of ideas and give my opinion on them, but I encourage everybody to give their opinion, propose alternatives, etc.

One thing that we should keep in mind is that certain dependencies between networks have to be enforced (in order for content to be validated).
For example, if state is enabled, history has to be as well (and probably soon, if history is enabled, beacon will have to be enabled as well).
This is not a major issue, as we can just shut down at startup and print a message.

Type A - Subnetwork and total capacity (different radius per network)

What we have at the moment, but with a smarter distribution of storage per network (e.g. if only 100 MB is given, split it 50/50; if 10 GB, split it 90/10; etc.)

Example: --portal_subnetworks=history,state --mb=1000

Pros: Simple experience for beginner users
Cons: Not customizable enough for expert users; weird behavior if different subnetworks are enabled between restarts
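The tiered split in Type A could be sketched as follows. This is a minimal sketch, assuming hypothetical tier boundaries; the issue only gives the 50/50 and 90/10 examples, and which network gets the larger share is my assumption, not something stated above.

```rust
// Hypothetical sketch of Type A: one total capacity, split between
// history and state by a tier table. All boundaries and ratios below
// are illustrative placeholders, not proposed values.
fn split_capacity_mb(total_mb: u64) -> (u64, u64) {
    // Returns (history_mb, state_mb).
    let state_fraction = match total_mb {
        0..=100 => 0.50,      // small nodes: split 50/50
        101..=10_000 => 0.90, // larger nodes: e.g. 90/10 toward state
        _ => 0.95,            // very large nodes: mostly state
    };
    let state_mb = (total_mb as f64 * state_fraction) as u64;
    (total_mb - state_mb, state_mb)
}
```

For example, `split_capacity_mb(100)` yields a 50/50 split, while `split_capacity_mb(10_000)` yields 1000 MB for history and 9000 MB for state.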

Type B - Explicit capacity for each network

Each subnetwork has its own storage capacity that is configured separately

Examples (1 GB for history, 2 GB for state):

  1. --portal_subnetworks=history:1000,state:2000
  2. --portal_subnetworks=history:1000 --portal_subnetworks=state:2000
  3. --history=1000 --state=2000

Note: We can support examples 1 and/or 2, or alternatively only 3; other ideas are welcome as well.

Some ideas for making it easier to use (not required for the first implementation): instead of explicitly specifying the capacity, one can set "auto" (or not set anything, in which case "auto" is the default).
The meaning of "auto" can be discussed separately. Some ideas:

  • 1GB if all capacities are "auto", otherwise average of all non-auto values
  • "auto" is allowed only if all capacities are "auto", in which case it means 1GB per subnetwork

Pros: explicit, fully customizable (good for expert users)
Cons: harder to define default values and make them clear to the user
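Example 3 combined with the first "auto" idea could be sketched like this. The flag shapes follow the examples above, but `parse_storage_flags`, `resolve_auto`, and the 1000 MB default are hypothetical names and choices for illustration.

```rust
use std::collections::HashMap;

// Hypothetical sketch of Type B parsing in the --history=1000 --state=2000
// style, where a missing or "auto" value defers to a computed default.
fn parse_storage_flags(args: &[&str]) -> HashMap<String, Option<u64>> {
    // None represents "auto"; Some(n) is an explicit capacity in MB.
    let mut caps = HashMap::new();
    for arg in args {
        if let Some(rest) = arg.strip_prefix("--") {
            if let Some((name, value)) = rest.split_once('=') {
                let cap = if value == "auto" { None } else { value.parse().ok() };
                caps.insert(name.to_string(), cap);
            }
        }
    }
    caps
}

fn resolve_auto(caps: &HashMap<String, Option<u64>>) -> HashMap<String, u64> {
    // First "auto" idea above: 1000 MB (1 GB) if all capacities are auto,
    // otherwise the average of the explicitly set values.
    let explicit: Vec<u64> = caps.values().filter_map(|c| *c).collect();
    let auto_mb = if explicit.is_empty() {
        1000
    } else {
        explicit.iter().sum::<u64>() / explicit.len() as u64
    };
    caps.iter()
        .map(|(name, cap)| (name.clone(), cap.unwrap_or(auto_mb)))
        .collect()
}
```

With `--history=1000 --state=auto`, state would resolve to 1000 MB (the average of the single explicit value).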

Type C - Subnetwork and total capacity (unified radius per network)

It seems that we don't want to go with this approach, but I added it for completeness.

Similar to "Type A", but capacity per subnetwork will be dynamic.

All subnetworks will have the same radius, and the amount stored per network will be a byproduct of that radius and the fixed total capacity.

Pros: simple user experience; each user's storage distribution should mirror the relative total storage of each subnetwork
Cons: weird behavior if the user enables/disables subnetworks between restarts; can cause weird behavior on a global scale (content from one network can disappear because another network gossiped a lot of new content)


I'm strongly in favor of "Type B". Of the alternative ways to achieve it, I'm leaning most towards the 3rd example (--history=1000 --state=2000).

@KolbyML (Member) commented Oct 23, 2024

I think we should, and can easily, support both A and B. They aren't mutually exclusive. I have said this in our discussions on the matter a few months ago. Having one doesn't mean we can't have the other. I think it truly makes sense to have both.

For A it would work like

--storage.limit=1000, which will automatically do the split between the different networks based on a pre-calculated allocations table.

For B it would work like

--storage.limit=1000 --storage.history=50, where the 50 indicates 50% of the total storage limit. If you don't specify percentages for certain active networks, it will figure out a split based on how much storage is left.

Also, if the user's split doesn't make sense (for example, someone running beacon,history), the leftovers would be allocated to the priority network. So let's say beacon will only take 2% max; we would give the remaining 48% back to history.

Network Priorities

If there is leftover storage, it will be distributed by priority, according to the pre-calculated allocations table

  • 1: state
  • 2: history
  • 3: beacon (apparently the beacon network plans to use storage to store bootstraps, and possibly start using a radius)

Networks will have max and min allocations depending on their requirements

  • history should have min: 100 MB, no max
  • state should have min: X MB, no max
  • beacon: min Y MB, max Z MB

Error case

If someone specifies storage for a subnetwork which isn't active, an error is thrown.

I don't think we should hijack the --portal_subnetworks param for allocating storage.
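The percentage-based split with leftover reallocation described above could be sketched as follows, for the beacon,history case only. The 2% beacon cap and the priority order come from the comment; the function name and everything else is a hypothetical illustration.

```rust
// Hypothetical sketch: --storage.limit=1000 --storage.history=50 on a
// beacon,history node. Beacon takes at most its cap (2%) out of the
// unassigned share; whatever remains unassigned flows back to the next
// network in priority order, which here is history.
fn allocate(total_mb: u64, history_pct: u64) -> (u64, u64) {
    // Returns (history_mb, beacon_mb).
    const BEACON_MAX_PCT: u64 = 2;
    // Beacon claims at most its cap from the share the user left unassigned.
    let beacon_pct = 100u64.saturating_sub(history_pct).min(BEACON_MAX_PCT);
    // The rest of the unassigned share falls back to history.
    let history_pct = 100 - beacon_pct;
    (total_mb * history_pct / 100, total_mb * beacon_pct / 100)
}
```

So `allocate(1000, 50)` gives history 980 MB and beacon 20 MB, matching the "give the 48% back to history" example.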

@pipermerriam (Member)

My 2 cents.

Two sets of mutually exclusive flags:

  • Flags for setting a unified global storage limit
    • --storage.total=1000
      • Allocates a total of 1000MB of storage.
      • Internally, this is divided intelligently between the running networks.
      • Networks do not contend for the same storage at runtime.
  • Flags for setting individual network storage limits
    • --storage.state=2000 --storage.history=1000 --storage.beacon=200
    • Allocates total storage for each individual network. Any network that is not explicitly configured would receive the default allocation.

For good UX, the client probably should:

  • Error or Warn if a storage limit is configured for a network that is not enabled.
  • Present the user with a confirmation if the configured storage limit is less than what is found on disk. Deletion of stored data should probably require an explicit confirmation.
  • Notify the user about the at-rest storage cost of any disabled networks. If state network is disabled, but there is 2GB of state data being stored on disk, notify the user during startup with a log message.
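The mutual exclusion between the two flag sets could be enforced with a startup check along these lines. This is a sketch: `validate_flags` and the error message are hypothetical, only the `--storage.*` flag names come from the suggestion above.

```rust
// Hypothetical sketch of the mutual-exclusion check: --storage.total
// cannot be combined with any per-network --storage.* limit.
fn validate_flags(total: Option<u64>, per_network: &[(&str, u64)]) -> Result<(), String> {
    if total.is_some() && !per_network.is_empty() {
        return Err(
            "--storage.total is mutually exclusive with per-network --storage.* flags".into(),
        );
    }
    Ok(())
}
```

For example, `validate_flags(Some(1000), &[("state", 2000)])` would be rejected, while either flag set on its own passes.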

@KolbyML (Member) commented Oct 23, 2024

@pipermerriam I am a fan of Piper's suggestion, as it accomplishes what I want (both A & B being supported) in a cleaner way than I suggested for case B.

I also like the --storage.* pattern, which is included in both of our suggestions. I find it to be a cleaner pattern, and it is a pattern used in most ELs, which is good.

@carver (Collaborator) commented Oct 24, 2024

Flags for setting a unified global storage limit

  • --storage.total=1000

    • Allocates a total of 1000MB of storage.
    • Internally, this is divided intelligently between the running networks.
    • Networks do not contend for the same storage at runtime.

Yeah, I like this at first glance. One downside is that it does give us a lot of homework: to define what "intelligent" means up front, and to keep re-defining it as data on the network shifts over time.

The size of that homework may multiply depending on the answer to: do we have to handle the auto-allocation cases across all possible selections of subnetworks? I'm going to pitch: no.

It would work like this: the only way you can use the simple storage.total is if you join every network (which should be the default as the networks become stable). Otherwise, you turn on the networks individually and set the storage at the same time. That leads us back to something like Milos's Type B for everything but the simplest case, which I'm happy with.

@KolbyML (Member) commented Oct 24, 2024

Yeah, I like this at first glance. One downside is that it does give us a lot of homework, to define what intelligent means up front, and to keep re-defining it as data on the network shifts over time.

I think we could just graph the growth rates of state and history and then extrapolate. It isn't like these grow at unknown rates, and there aren't that many combinations to begin with. Also, the gas limit doesn't change randomly, and when it does change, it changes slowly.

The size of that homework may multiply depending on the answer to: do we have to handle the auto-allocation cases across all possible selections of subnetworks? I'm going to pitch: no.

There aren't that many combinations, because some networks are dependent on others to run, so there are really only 3 combinations to worry about:

[Beacon], [Beacon, History], [Beacon, History, State]. Any other combination would throw an error, since a combination where the node can't validate canonicalness isn't a valid combination.

If the table ranges are 100 MB-500 MB, 500 MB-1 GB, 1 GB-2 GB, 2 GB-5 GB, 10 GB-50 GB, 50 GB-250 GB.

3 × 6 = 18, so 18 possible combinations. Maybe we want to add a few more ranges, but this isn't that complex.

It would work like this: the only way you can use the simple storage.total is if you join every network (which should be the default as the networks become stable). Otherwise, you turn on the networks individually and set the storage at the same time. That leads us back to something like Milos's Type B for everything but the simplest case, which I'm happy with.

the only way you can use the simple storage.total is if you join every network

I don't think this is true; as I said above, smartly allocating isn't that hard of a problem.

I would argue "intelligently" isn't that smart; it is just a table which we could generate using some simple statistics.
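That table idea could look something like the sketch below. Every percentage and range boundary here is an illustrative placeholder, not a proposed allocation; only the three valid network combinations and the error case come from the discussion above.

```rust
// Hypothetical sketch of the pre-calculated allocations table: for each
// valid network combination and total-storage range, a fixed percentage
// split. Invalid combinations return None (the node would error out).
fn split_pcts(networks: &[&str], total_mb: u64) -> Option<Vec<(&'static str, u64)>> {
    match (networks, total_mb) {
        (["beacon"], _) => Some(vec![("beacon", 100)]),
        (["beacon", "history"], 0..=1_000) => Some(vec![("beacon", 10), ("history", 90)]),
        (["beacon", "history"], _) => Some(vec![("beacon", 2), ("history", 98)]),
        (["beacon", "history", "state"], 0..=1_000) => {
            Some(vec![("beacon", 10), ("history", 45), ("state", 45)])
        }
        (["beacon", "history", "state"], _) => {
            Some(vec![("beacon", 2), ("history", 10), ("state", 88)])
        }
        _ => None, // can't validate canonicalness: invalid combination
    }
}
```

Regenerating this table as the networks grow would just mean re-running the statistics and updating the percentages.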
