Sequence Number Behavior #5
So when I first read an explanation of the algorithm, I found it equally confusing. I tried to provide an example of the algorithm in action in the hope that it would make the logic easier to follow, but yeah, the explanation could use better wording and some clarification. Ultimately, I think the example I give in the video on the README is much easier to follow, so perhaps I can also supplement the article with some images. For the time being, I'll answer your doubts here and make changes to the wiki accordingly.
Yes.
The server reads the Local Sequence Number (LSN) in the packet and compares it to its own Remote Sequence Number (RSN)
Correct
This needs clarification on my part. I decided to initialize the server's LSN at 100 to illustrate that when a client connects, the server's LSN might not necessarily be at zero (for instance, if the connection was lost on the client's side and the UDP handler had to be restarted). I should've given this context first, but to be honest this is such a rare scenario that it might not be worth bringing up.
The server starts with its LSN at 100 and sends it to the client, which then compares it to its own RSN. Since the client's RSN (0) is smaller than the LSN in the packet (100), the client sets its RSN to 100.
Any time the server sends a new packet, it increments its LSN by 1. I emphasize a new packet because you might need to resend a packet if its acknowledgement is missing. In this case you wouldn't increment the LSN, since you'd re-use the values already in the packet.
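To make that concrete, here's a minimal sketch (the names are mine, not the library's API) of the send side only bumping the LSN for brand-new packets:

```c
#include <stdint.h>

/* Illustrative sketch only: the LSN advances when a brand-new packet is
   built; a resend transmits the stored packet again with its original
   sequence values untouched. */
static uint16_t local_seq = 0;

uint16_t next_local_seq(void) {
    return ++local_seq;   /* new packet: LSN goes up by 1 (wraps after 65535) */
}
```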
Every packet always acknowledges the packets its sender has received, unless the unreliable flag is set. Every packet contains the data that you want to send (for example, player coordinates) plus the acknowledgement data (LSN, RSN, and bitmask).
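As a rough picture of what travels in each packet (the field names here are my own illustration, not the library's wire format):

```c
#include <stdint.h>

/* Hypothetical header layout, for illustration only. */
typedef struct {
    uint16_t lsn;       /* sender's Local Sequence Number for this packet      */
    uint16_t rsn;       /* highest LSN the sender has received (its RSN)       */
    uint16_t ack_bits;  /* bitmask of earlier packets the sender has received  */
    uint8_t  flags;     /* e.g. an "unreliable" bit that skips acknowledgement */
    /* ...followed by the payload (player coordinates, chat messages, etc.)    */
} packet_header;
```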
Correct
The way I implemented it in code is that any time a newly received LSN is significantly smaller than the current RSN, it is assumed to actually be a larger value. What "significantly smaller" means is implementation dependent, I guess, but in my case I am using half of the maximum sequence number value. So if a client has RSN 65535 and it receives a packet with LSN 0, 1, 2, 3, ... all the way up to 32767 (half of 65535), it assumes that these values are larger than its RSN. One could use a much smaller value, like 300, because it's very unlikely that you would have RSN 65535 and then suddenly drop 30,000 packets without the connection dying. This was just how I chose to implement it. Your explanation afterwards is spot on.
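A minimal sketch of that comparison, assuming 16-bit sequence numbers and the half-range cutoff described above (the function name is mine):

```c
#include <stdbool.h>
#include <stdint.h>

/* If the incoming LSN is "significantly smaller" than the current RSN
   (more than half the 16-bit range behind), unsigned wraparound makes
   the forward distance small again, so it is treated as newer. */
bool lsn_is_newer(uint16_t incoming_lsn, uint16_t current_rsn) {
    uint16_t forward_distance = (uint16_t)(incoming_lsn - current_rsn);
    return forward_distance != 0 && forward_distance < 32768;
}

/* On receive, the RSN is only advanced for packets judged newer:
   if (lsn_is_newer(packet.lsn, remote_seq)) remote_seq = packet.lsn;  */
```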
Yeah so, the idea behind the ack bitfield is to compress the list of all previously received packets so that the other side can spot which packets haven't been acknowledged. Again, I think the video makes this explanation a lot easier to understand.
So, the idea is that the RSN represents the largest LSN received so far, and the ack bitfield tells you which of the previously received packets (up to 16 of them) also arrived. So, for instance, if you receive a packet with an RSN of 18, each set bit in the ack bitfield marks one of the packets just before 18 as received, and each cleared bit marks one as still unacknowledged.
The example uses 8 bits just for legibility; the library itself uses a 16-bit bitfield.
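Here's one way such a bitfield could be built, assuming a 16-bit field where bit n stands for the packet with sequence `rsn - 1 - n` (the exact bit ordering is my assumption for illustration, not necessarily what the library does):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: the receiver builds the 16-bit ack bitfield it will send back.
   `received` is whatever structure the receiver uses to track arrivals;
   here it is just one flag per possible sequence number. */
uint16_t build_ack_bits(uint16_t rsn, const bool received[65536]) {
    uint16_t bits = 0;
    for (int n = 0; n < 16; n++) {
        uint16_t seq = (uint16_t)(rsn - 1 - n);   /* wraps correctly below 0 */
        if (received[seq])
            bits |= (uint16_t)(1u << n);
    }
    return bits;
}

/* Sender side: the RSN itself plus every set bit counts as acknowledged;
   anything that leaves this 17-packet window while still unacked is a
   candidate for resending. */
```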
Yes, with one caveat. The whole sequence number logic works because in the C/S model, both sides of the connection are constantly sending information to one another, so you might as well leverage that to send acknowledgement data along with whatever other data you want to send. But sometimes this communication can be one-sided (for instance, if a client is downloading something from the server, the server is sending a lot of information to the client but the client doesn't really have anything to send back). In that case, the server tags its packets with a flag that asks the client to reply with an acknowledgement anyway. While the connection exists, heartbeat packets are also sent periodically to make sure the connection is still alive. But remember, we can only account for 17 packets before the bitfield overflows. Heartbeat packets might not be sent fast enough to cover how many packets go out between them, so flagging the packets keeps the acknowledgements coming back before that window overflows.
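One way to picture why the flag matters (this is my own framing of the rule, not necessarily how the library decides): only the RSN plus 16 bits of history fit in an acknowledgement, so a sender that has pushed out close to 17 packets without hearing back needs to ask for an ack explicitly.

```c
#include <stdbool.h>

/* Sketch: if nearly a full window of packets has gone out with no
   acknowledgement coming back, flag the next packet so the otherwise
   silent receiver replies with ack data anyway. */
#define ACK_WINDOW 17   /* RSN + 16-bit bitfield */

static int sent_since_last_ack = 0;

bool should_request_ack(void) {
    return sent_since_last_ack >= ACK_WINDOW - 1;
}
```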
I've read and reread the wiki section on sequence numbers several times, yet I'm still having trouble deciphering the exact behavior of these numbers. While an example is given, it doesn't sufficiently explain the logic behind how these numbers change and how they are to be used by either end.
I've written up some clarification questions on what's included in the docs already. However, ultimately what I'd really like to see is a more formal list of rules. (So it may be more efficient to do the latter than to answer each individual question.)
The remaining steps of the example just feel like repeats of the above. What criteria are the server and client waiting for to indicate they can stop the cycle of acknowledging each other? Or does this example imply that these packets are not acknowledgements at all, but simply apply to all packets with no expectations of what will be received?
If my guesses from above are correct, sequence numbers operate as such:
* The documentation rightly cautions that care must be taken regarding overflows and comparisons around the boundaries of a sequence number. But it doesn't explain the expected logic. Since UDP packets can arrive out of order, it's not safe to assume that any time the 2nd number is less than the 1st, the 2nd must be newer.
For the sake of reliable communication, especially between the master server and other implementations, how this is handled should be consistent across all clients and servers.
The simplest method I can come up with would be to say: if `s2 < s1` and `(s2 + 2^16 - s1) < 2^15`, then s2 is newer.
The equation can probably be rearranged, but the point is that if s2 is less than s1, AND the forward change from s1 to s2 is less than half the max size of sequence numbers, s2 must be newer.
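For what it's worth, here is a sketch of that rule as code, assuming 16-bit sequence numbers (names are illustrative, not a proposal for the library's API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the rule above: when s2 compares as smaller, adding 2^16
   undoes a possible overflow; if the forward distance from s1 to s2 is
   then under 2^15, s2 is considered newer. */
bool s2_is_newer(uint16_t s1, uint16_t s2) {
    if (s2 == s1) return false;
    if (s2 > s1)  return (uint32_t)(s2 - s1) < (1u << 15);  /* no wraparound       */
    return ((uint32_t)s2 + (1u << 16) - s1) < (1u << 15);   /* wraparound undone   */
}
```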
In my mind, the `+ 2^16` part of the equation undoes any possible overflow that may have happened. Then if the distance from s1 to s2 is smaller than half the maximum, we consider s2 to be newer. We could use some other value instead of 2^15 if we wanted, but it seems like a reasonable cutoff point. Even if a client/server was sending 300 packets per second, a packet would have to be delayed by ~1.8 minutes to be incorrectly judged as new.

The section about ack bitfields doesn't make sense at all. I sense that it is relevant to understanding how clients/servers actually make use of sequence numbers, but it's difficult to guess at how the logic works. The example isn't sufficient for recognizing patterns.
Some questions include:
This part may need to be a separate discussion altogether, and similarly it may be better to answer with a clear list of rules or steps that apply to all situations rather than just one example.