fix broken gzip file produced sometimes #82
base: main
Conversation
1cdfb49 to 4409544
Hi @jsvd, @colinsurprenant - can you please review this PR? Thanks.
Hi, any word on this? CLA is signed btw.
Hey there! I recently stumbled across this issue. We have a relatively busy Logstash server, and I'm currently going through and calculating the exploded size of all of the gzip files output by this plugin, so I'll have some good stats on how frequently we run into it. So far it seems to be about 1% of files, but it may well be happening specifically when Logstash is restarted. Is there anything I can help with to get this merged, tested, and deployed?
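For anyone wanting to gather the same statistics, here is a minimal Ruby sketch (not part of the plugin) that walks a directory of .gz files and reports any that cannot be fully decompressed, for example because the gzip trailer was never written; the directory path and chunk size are placeholder assumptions.

```ruby
require "zlib"

# Hypothetical output directory; adjust to wherever the file output writes.
Dir.glob("/var/log/logstash-output/**/*.gz").each do |path|
  exploded_bytes = 0
  begin
    Zlib::GzipReader.open(path) do |gz|
      # Read in chunks so large archives do not have to fit in memory.
      while (chunk = gz.read(1024 * 1024))
        exploded_bytes += chunk.bytesize
      end
    end
    puts "OK      #{path} (#{exploded_bytes} bytes uncompressed)"
  rescue Zlib::GzipFile::Error, Zlib::Error => e
    # Truncated or corrupted archives end up here (missing footer, CRC error, ...).
    puts "BROKEN  #{path}: #{e.message}"
  end
end
```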
@nit23uec Thanks for your contribution and sorry for the delay in following up.
Given the above I am not sure we actually need the tmp file writing strategy here, and possibly we only need to more-or-less make sure that […]

There is however a deeper problem at play, which I also talked about in #79: there is currently no way to safely consume a file produced by the file output while Logstash is running, because there is no way to know when a file is finished being written to and is done with for good. This is a larger issue and is not only related to zipped files; the only way currently to safely consume a file from the file output is by shutting down Logstash. Note that this problem might not be relevant for (text) files that are consumed in a tailing/streaming way, but that is not applicable to zipped files, which cannot be consumed while they are being written. LMKWYT.
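For context, the "write to a temporary file, then rename into place" strategy discussed above boils down to the pattern sketched below. This is only an illustration of the idea, not the plugin's actual code; the file names and the event serialization are assumptions.

```ruby
require "zlib"
require "fileutils"

# Write the gzip archive under a temporary name and only move it to its final
# name once the stream has been closed, so a consumer never sees an archive
# whose gzip trailer has not been written yet.
def write_gzip_atomically(final_path, lines)
  tmp_path = "#{final_path}.tmp"
  Zlib::GzipWriter.open(tmp_path) do |gz|
    lines.each { |line| gz.puts(line) }
  end # closing the writer flushes the buffer and appends the gzip trailer
  # On the same filesystem this rename is atomic.
  FileUtils.mv(tmp_path, final_path)
end

write_gzip_atomically("/tmp/events.log.gz", ["event 1", "event 2"])
```

This does not by itself address the deeper problem described above: even with atomic renames, a consumer still has no signal that Logstash is done appending to a given path for good.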
Hello! We've also stumbled upon corrupted gzip outputs. Is there any chance to get this pull request merged?
Hi @makefu, I'll take care of this.
I think that this PR doesn't match the original fix suggestion defined in #73.
Right now, however, it seems like living with the file corruption is somewhat worse than applying a workaround. It looks like for our setup we will also have to shift to writing files in plain text and using logrotate to gzip them periodically.
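As an illustration of that workaround, a logrotate snippet along these lines would let the file output write plain text and leave compression to the rotation job; the path and schedule here are placeholders, and `delaycompress` keeps the most recently rotated file uncompressed in case it is still being appended to.

```
/var/log/logstash-output/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
}
```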
Closes #79
Thanks for contributing to Logstash! If you haven't already signed our CLA, here's a handy link: https://www.elastic.co/contributor-agreement/