
state-bridge: Census peer scoring #1501

Open
morph-dev opened this issue Oct 1, 2024 · 6 comments
@morph-dev (Collaborator)

Problem

The Census only keeps track of available peers, and their liveness is checked just once per hour.

When peers are requested for a given content id, the Census filters out all peers whose radius doesn't include the provided content, and returns the first X (4 at the time of writing).

Ideally, we should have some reputation or scoring system for the peers and use it when selecting them.

High level idea

Firstly, we should still filter out peers that are not interested in the content (as we do now). Then, each peer will be assigned a weight, and X peers will be selected randomly based on their weight.

The weight can be calculated as the product of the following: distance, reputation, and recency weights.

Note: All constants can be configurable parameters. The given values are picked by feeling and could be a good starting point (unless somebody has other suggestions).

Distance weight

This is the main weight: peers should be selected based on their distance to the content.

distance_weight = u16::MAX - distance(peer_id, content_id).as_u16()

The as_u16 function takes the 16 most significant bits (2 bytes) of the U256 distance metric.
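As a sketch (assuming the caller has already reduced the U256 distance to its 16 most significant bits; the function name is hypothetical):

```rust
/// Sketch of the proposed distance weight. The input is assumed to be the
/// 16 most significant bits of the U256 XOR distance between peer and content.
fn distance_weight(distance_as_u16: u16) -> u32 {
    // Closer peers (smaller distance) receive a larger weight.
    (u16::MAX - distance_as_u16) as u32
}

fn main() {
    assert_eq!(distance_weight(0), u16::MAX as u32); // closest possible peer
    assert_eq!(distance_weight(u16::MAX), 0);        // farthest possible peer
}
```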

Reputation weight

This is the main penalty weight: peers with recent failed attempts should be penalized.

reputation_weight = 1 / (2^recent_failed_attempts) as f64

The recent_failed_attempts is the number of recently failed OFFER requests to the peer. Only the last 16 OFFER requests should be considered. Rejected offers shouldn't be counted as failures.

Each failed attempt lowers the weight by factor of 2.
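A minimal sketch of this penalty, with a hypothetical function name and the 16-OFFER cap made explicit:

```rust
/// Sketch: each recent failed OFFER halves the weight. Only the last 16
/// OFFERs are considered, so `recent_failed_attempts` is capped at 16.
fn reputation_weight(recent_failed_attempts: u32) -> f64 {
    1.0 / 2f64.powi(recent_failed_attempts.min(16) as i32)
}

fn main() {
    assert_eq!(reputation_weight(0), 1.0);   // no recent failures: no penalty
    assert_eq!(reputation_weight(3), 0.125); // three failures: weight / 8
}
```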

Recency weight

This is the main recovery function: it "rewards" peers that weren't offered content in a long time.
Peers that were failing requests in the past but have recovered in the meantime would have a hard time being selected, and this weight should help with that.

recency_weight = 2 ^ ((now - last_offer_timestamp).as_seconds() / 300)

For every 5 minutes that we didn't offer content to the peer, their weight increases by a factor of 2.

Assuming all peers are healthy, this shouldn't be a big factor, because the high volume and random distribution of content ids mean all peers should be selected before this weight grows too big.

This only starts playing an important role for peers that are penalized by reputation_weight, in which case it takes 5 minutes to recover one failed attempt.
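A sketch of the recovery curve, with a hypothetical function name; the last assertion shows the 5-minutes-per-failure recovery interacting with the reputation weight:

```rust
/// Sketch: weight doubles for every 5 minutes (300 s) since the last OFFER.
fn recency_weight(seconds_since_last_offer: f64) -> f64 {
    2f64.powf(seconds_since_last_offer / 300.0)
}

fn main() {
    assert!((recency_weight(0.0) - 1.0).abs() < 1e-9);   // just offered: no boost
    assert!((recency_weight(600.0) - 4.0).abs() < 1e-9); // 10 minutes idle: 2^2
    // One failed attempt (reputation weight 0.5) is balanced out after
    // 300 seconds without an offer (recency weight 2.0).
    assert!((0.5 * recency_weight(300.0) - 1.0).abs() < 1e-9);
}
```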

Future improvements

Some ideas are omitted from the initial approach, as I want to start with something simple and add more complexity afterwards if needed.

  • Latency weight - reward/penalize peers based on their latency/bandwidth
  • How should we handle peers that seem offline (e.g. failed PING request)? Should we keep pinging them (e.g. with an exponential backoff period) until we eventually drop their info?
@morph-dev morph-dev added enhancement New feature or request portal-bridge Portal bridge labels Oct 1, 2024
@pipermerriam (Member)

I'll propose an alternate framework. Weights are all additive to avoid multipliers, making it much easier to reason about the ranges for different values.

  • The base set of peers is the known peers whose data_radius has them in-range for the content.
  • Peers start with 0 weight.
  • Peers can have a maximum of 200 weight.
  • Peers can have a minimum of -200 weight.
  • Weight is tracked over a rolling 15-minute time window. Events that move out of this window no longer affect a peer's weight.
  • Failing a transfer deducts 10 from a peer's weight.
  • Succeeding a transfer adds 5 to a peer's weight.

Given the list of interested peers, a weighted random selection should be made from the calculated weights.
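A hypothetical sketch of this rolling-window scheme (names and data layout are invented; a real implementation would likely prune old events rather than keep a growing Vec):

```rust
/// Events that fall outside the last 15 minutes no longer count.
const WINDOW_SECS: u64 = 15 * 60;

struct PeerScore {
    /// (timestamp_secs, delta) pairs: +5 for a successful transfer,
    /// -10 for a failed one.
    events: Vec<(u64, i32)>,
}

impl PeerScore {
    /// Sum the in-window deltas and clamp to the proposed [-200, 200] range.
    fn weight(&self, now_secs: u64) -> i32 {
        let sum: i32 = self
            .events
            .iter()
            .filter(|(ts, _)| now_secs.saturating_sub(*ts) <= WINDOW_SECS)
            .map(|(_, delta)| *delta)
            .sum();
        sum.clamp(-200, 200)
    }
}

fn main() {
    // At t=1000s the event at t=0 has aged out of the 900s window;
    // the failure at t=100 and success at t=1000 remain: -10 + 5 = -5.
    let peer = PeerScore { events: vec![(0, 5), (100, -10), (1000, 5)] };
    assert_eq!(peer.weight(1000), -5);
}
```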

The main thing I'd like to augment this with would be data transfer rate, which I think requires us to have an actual ongoing measurement of each individual peer's throughput in something like bytes/sec. With that we can award additional weight based on how they do in comparison with the other peers.

@morph-dev (Collaborator, author)

morph-dev commented Oct 1, 2024

Since you didn't mention it, do you think it's important to prioritize / give higher weight to nodes that are closer to the content?

  • Peers can have a maximum of 200 weight.
  • Peers can have a minimum of -200 weight.

...
Given the list of interested peers, a weighted random selection should be made from the calculated weights.

Can you elaborate, from probability/algorithm point of view, how negative weights would be used?
To my knowledge, the most commonly used sampling uses positive weights, where the probability of element i being selected is: $p(i) = w_i / \sum_j w_j$.

I guess your idea can be used in the same way as if starting weight is 200 and min-max range is 0-400, but I'm not sure if probability distribution you had in mind is different.
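A sketch of that shifted-range sampling; the uniform draw `r` is passed in so the example stays deterministic, whereas a real implementation would pull it from a `rand` RNG:

```rust
/// Map raw weights in [-200, 200] to [0, 400], then do standard cumulative
/// weighted sampling. `r` is a uniform draw in [0, 1), supplied by the caller.
fn weighted_pick(raw_weights: &[i32], r: f64) -> usize {
    let shifted: Vec<f64> = raw_weights.iter().map(|w| (w + 200) as f64).collect();
    let total: f64 = shifted.iter().sum();
    let mut threshold = r * total;
    for (i, w) in shifted.iter().enumerate() {
        if threshold < *w {
            return i;
        }
        threshold -= w;
    }
    shifted.len() - 1 // guard against floating-point edge cases
}

fn main() {
    // Peer 0 at -200 raw weight shifts to 0 and can never be picked;
    // peer 2 (shifted weight 400) is twice as likely as peer 1 (200).
    let weights = [-200, 0, 200];
    assert_eq!(weighted_pick(&weights, 0.0), 1);
    assert_eq!(weighted_pick(&weights, 0.9), 2);
}
```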


I'm also pretty sure we can implement multiple different "peer scoring" algorithms and compare performance (e.g. percentage of failed attempts).

@pipermerriam (Member)

For the range, I imagined that implementations would treat this as a weight in the range 0-400. It felt awkward to define the algorithm with each node having a starting weight of 200. Also worth saying that the exact numbers here are utterly arbitrary and I'm totally open to suggestions for better numbers.

Since you didn't mention it, do you think it's important to prioritize / give higher weight to nodes that are closer to the content?

No, I don't think distance should be a part of this. The neighborhood gossip should take care of getting content to the closest nodes, and I don't think pushing it to the closest nodes is likely to improve our primary performance metric (rate-or-speed of gossip), nor do I think it improves network health.

@pipermerriam (Member)

If we were able to have throughput measurements, then I would suggest this modification.

  • Let MBR be the mean bit rate across recent successful transfers.
  • After a successful transfer, in addition to the bonus +5 weight for the success:
    • if the bitrate was above the MBR the peer gets an extra 1 weight
    • if the bitrate was below the MBR the peer gets deducted 1 weight

I also think that we should add extra penalties for failed transfers. In theory we could do this by measuring wasted bandwidth and time. The MBR tells us how long a transfer should take on average; combined with the payload size, that gives a time T, the expected mean transfer time. Ideally we'd deduct additional weight for a peer taking extra long to fail, and extra points for using bandwidth beyond what transferring the base payload would have needed... I hope this makes sense.
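A sketch of the bit-rate bonus on top of the flat +5 (names hypothetical; the MBR is assumed to be tracked elsewhere):

```rust
/// Sketch: weight delta for a successful transfer, adjusted by +-1
/// depending on whether it beat the mean bit rate (MBR) of recent
/// successful transfers.
fn success_weight_delta(bitrate: f64, mean_bitrate: f64) -> i32 {
    let base = 5; // flat bonus for any successful transfer
    if bitrate > mean_bitrate {
        base + 1
    } else if bitrate < mean_bitrate {
        base - 1
    } else {
        base
    }
}

fn main() {
    assert_eq!(success_weight_delta(12.0, 10.0), 6); // faster than average
    assert_eq!(success_weight_delta(8.0, 10.0), 4);  // slower than average
}
```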

@pipermerriam (Member)

Before you implement this in trin, can you make sure that we have measurable metrics for knowing whether the change improved the throughput of the bridge...

@KolbyML (Member)

KolbyML commented Oct 14, 2024

One thing to note: currently we filter out Ultralight nodes. Peer scoring mainly helps with peers that don't perform well, and as our networks grow, the number of bad peers will grow too.

Currently, filtering out Ultralight made a big difference. I think peer scoring is the right way to do this, as we shouldn't discriminate against a node based on its implementation but on its performance. Bad nodes block throughput that we could be using for working nodes.

In a perfect world, something like this wouldn't make a difference. My assumption is that as the networks grow, the importance of peer scoring grows.
