Rsync TLS sends the complete file to the destination when a few characters are added at the end of the file #1028
@dheerajjoshim I'm wondering if something else is actually going wrong here. I can see some errors like this in the 2nd source mover log:
Eventually it does complete:
I would interpret this as: it actually only needed to send 131 bytes, but it did need to receive 1.7M from the destination in order to do the compare. However, as you've noticed, it does take a long time. So perhaps something is happening when rsync is computing its checksums on the destination file in order to figure out what to send. Do you see anything happening to the destination mover pods while this is happening? I'm wondering if they were getting killed or throttled somehow. We only seem to have logs from this time on the destination:
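(For context: rsync's delta-transfer algorithm has the receiver checksum its existing copy of the file and send those checksums back, so bytes received can dwarf bytes sent even when almost nothing changed. A minimal sketch of how one might observe this with plain rsync outside VolSync; the command and paths are illustrative, not the mover's actual invocation:)

```sh
# Illustrative only: observing rsync's delta-transfer behavior with a
# standalone command (not the exact invocation the VolSync mover uses).
# --no-whole-file forces the delta algorithm; --stats reports how much
# data was sent literally vs. matched against the destination's copy.
rsync -av --no-whole-file --stats /data/bigfile remote:/data/bigfile

# In the --stats output, a tiny "Literal data" figure alongside a large
# "Matched data" figure means only the changed tail was actually sent,
# even though checksum traffic from the receiver may be much larger.
```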
Hi. Yes. After the first successful sync (complete file transfer), the destination mover pod exited successfully. The pod was marked Completed.
And I observed that if the destination mover pod comes up before the source pod exhausts its retry count, the logs look like the one below:

Mover pod idle state success.log

Otherwise, after 5 retries, the mover pod looks like:
I'm not sure I understand. If you're starting another replication while the previous destination is still shutting down, then I think we'd expect to see connection errors. Is there a reason you're starting the source replications so quickly? Normally I'd expect you to use a trigger, either to schedule syncs at specific times/dates/intervals (for example, every 5 minutes) or to trigger a sync manually at a time of your choosing. Either way, we do not expect a full file to be re-sent. The latest logs no longer show your original issue, as nothing appears to have been sent or updated. Maybe, to separate the issues, you can re-create the problem by scheduling a replication source to run while the destination is already running, and capture the logs.
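(For reference, a minimal sketch of a schedule trigger on a `ReplicationSource` using the rsync-TLS mover; the PVC name, Secret name, and address below are placeholders, not values from this report:)

```sh
# Sketch: a ReplicationSource that syncs every 5 minutes via a cron-style
# schedule trigger. All names and the address below are placeholders.
kubectl -n nsa apply -f - <<'EOF'
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: source
spec:
  sourcePVC: data-pvc              # placeholder source PVC
  trigger:
    schedule: "*/5 * * * *"        # cron spec; a "manual:" trigger can be used for on-demand syncs
  rsyncTLS:
    keySecret: tls-key             # placeholder pre-shared TLS key Secret
    address: <destination-address> # taken from the ReplicationDestination's status
    copyMethod: Snapshot
EOF
```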
Yes. The last error logs were just to show the errors we sometimes see in the system.
Sounds good. @dheerajjoshim, do you have any outstanding concerns, or should I close this issue?
Another question related to this topic is an improvement idea: allow in-place updates to the file using rsync's `--inplace` option. Is it possible to add in-place update support, or would you accept a PR for the same?
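(For readers unfamiliar with the flag: by default rsync reconstructs a changed file into a temporary copy and renames it over the destination, while `--inplace` patches the existing file directly so only changed blocks are rewritten. A generic illustration, not VolSync's actual mover command:)

```sh
# Generic rsync illustration (not the VolSync mover's command line):
# --inplace updates the destination file in place instead of writing a
# temporary copy, which matters for very large, mostly-unchanged files.
rsync -av --inplace --no-whole-file /data/bigfile remote:/data/bigfile
```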
Thanks @dheerajjoshim, I've created an enhancement issue here: #1044. Closing this bug, as I think it's working as designed.
We use VolSync to send files from the primary cluster to the secondary cluster. One of our volumes has a single large file, around 50G+ in size.
We were testing how such a huge file would be handled by VolSync. We initially generated the file during testing, and the entire file was replicated to the remote site. Then we appended a few characters to the file. VolSync then replicated the entire file to the remote site rather than sending only the modified chunks.
This can be reproduced in a local cluster itself (a scripted sketch of these steps follows the list):

1. Use two namespaces, `nsa` and `nsb`, to simulate source and destination, where `nsa` will be the source and `nsb` the destination.
2. Create a PVC of 100G in each namespace.
3. Create a pod in the source namespace and attach it to the PVC.
4. Log in to the pod in the source namespace and generate a large file.
5. Create a TLS passkey for `rsyncTLS` in both the source and destination namespaces.
6. Create a `ReplicationSource` and a `ReplicationDestination` in the source and destination namespaces.
7. Monitor the source mover pod and check that the file is completely transferred to the destination PVC (source mover 1st run.log).
8. Afterwards, log in to the BusyBox pod and append a few lines at the end of the generated file.
9. Monitor the source and destination mover pods and see that the entire file is moved again (source mover 2nd run.log, destination mover 2nd run.log).
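A shell sketch of the steps above; all resource names, sizes, and paths are placeholders, and the PVC/pod manifests are omitted:

```sh
# Sketch of the reproduction; names, sizes, and paths are placeholders.
kubectl create namespace nsa   # source
kubectl create namespace nsb   # destination

# Create a 100G PVC and a BusyBox pod mounting it in nsa (manifests omitted),
# then generate a large file, e.g. ~50G of random data:
kubectl -n nsa exec busybox -- dd if=/dev/urandom of=/data/bigfile bs=1M count=51200

# After the first sync completes, append a few characters to the file:
kubectl -n nsa exec busybox -- sh -c 'echo "a few extra chars" >> /data/bigfile'

# Then watch the mover pods' logs during the next sync:
kubectl -n nsa logs -f <source-mover-pod>
kubectl -n nsb logs -f <destination-mover-pod>
```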
The expectation was that only the last chunk would get copied to the destination PVC. Is there a way to enable more verbose logging for the rsync command itself that might indicate why the entire file gets copied to the destination despite only a few lines being added at the end of the file?
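(As a general note: plain rsync can explain its per-file decisions with `-vv` and `--itemize-changes`; whether the VolSync mover exposes a supported way to pass such flags through is a question for the maintainers. Generic illustration only:)

```sh
# Generic rsync verbosity flags (illustrative; not a documented VolSync option):
#   -vv                 extra per-file progress and decision messages
#   --itemize-changes   a one-line change code per file explaining why it transfers
rsync -avv --itemize-changes /src/ remote:/dst/
```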