Currently, I think we just assume that we get all the packets. This means that if some packets are lost (which we have occasionally seen; we aren't sure of the root cause, but it seems to be fixed by rebooting an X engine) we effectively get a flux scale change in the data that can be frequency/time dependent. If we knew how many packets went into each integration, we could work out what happened, track X engine health, and potentially restore a consistent flux scale by dividing out the packet count, as sketched below.
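A minimal sketch of what that correction could look like, assuming the correlator exposed a per-integration (and per-channel) count of packets actually received. The function name, array names, and the (time, freq) layout are all hypothetical, just to illustrate the dividing-out step:

```python
import numpy as np

def renormalize_by_packet_count(vis, packets_received, packets_expected):
    """Rescale visibilities so integrations with dropped packets are put
    back on the same flux scale as complete integrations.

    vis              : complex array, shape (n_time, n_freq)  [assumed layout]
    packets_received : int array, same shape, packets that actually arrived
    packets_expected : scalar or array, packets that should have arrived
    """
    fraction = packets_received / packets_expected
    good = fraction > 0
    # Divide out the received fraction; flag samples where nothing arrived
    # instead of dividing by zero.
    corrected = np.where(good, vis / np.where(good, fraction, 1.0), np.nan + 0j)
    return corrected, ~good

# Example: an integration that only received 90% of its packets gets
# scaled up by 1/0.9; one that received nothing gets flagged.
vis = np.ones((2, 3), dtype=complex)
packets = np.array([[1000, 900, 0], [1000, 1000, 1000]])
corrected, flags = renormalize_by_packet_count(vis, packets, 1000)
```

The same packet-count array would double as an X engine health metric: a sustained dip in the received fraction for a given engine would point at which one needs a reboot.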