Logstash version (bin/logstash --version):
Using bundled JDK: /usr/share/logstash/jdk
logstash 8.5.1
Logstash installation source: RPM via yum
How is Logstash being run: as a service via systemd
How was the Logstash plugin installed: via RPM
JVM (java -version): the bundled JDK (see the version output above), so the separate JVM details requested for non-bundled JDKs do not apply.
OS version (uname -a if on a Unix-like system):
Description of the problem including expected versus actual behavior:
The S3 input causes a complete failure of Logstash. Each S3 bucket is configured in a separate pipeline, and some buckets contain thousands of files.
As far as I can tell, the issue occurs at random. When it does, the error below appears and Logstash never recovers; the only fix is restarting the service. The error repeats for every pipeline with an S3 input, but the failure is not limited to the S3 inputs: all other pipelines fail silently with no errors, and Logstash's built-in APIs stop returning data.
Setting pipeline.workers in pipelines.yml seems to reduce how often the failure occurs, but this remains an app-breaking issue.
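For context, a hypothetical sketch of the multi-pipeline layout described above (pipeline IDs, config paths, and the worker count are placeholders, not the reporter's actual settings):

```yaml
# pipelines.yml (illustrative): one pipeline per S3 bucket,
# with pipeline.workers capped to reduce how often the failure occurs.
- pipeline.id: s3-bucket-a
  path.config: "/etc/logstash/conf.d/s3-bucket-a.conf"
  pipeline.workers: 1
- pipeline.id: s3-bucket-b
  path.config: "/etc/logstash/conf.d/s3-bucket-b.conf"
  pipeline.workers: 1
```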
Steps to reproduce:
Please include a minimal but complete recreation of the problem, including (e.g.) pipeline definition(s), settings, locale, etc. The easier you make it for us to reproduce, the more likely somebody will take the time to look at it.
I do not have a consistent way to reproduce this. The logs show nothing out of the ordinary before it occurs, even at debug level.
Example input config:
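The actual config was not captured in the report. Purely for illustration, a minimal s3 input of the kind described (one per bucket pipeline; the bucket name, region, and prefix below are placeholders):

```
input {
  s3 {
    # Hypothetical values -- not the reporter's real bucket or region.
    bucket => "example-bucket"
    region => "us-east-1"
    prefix => "logs/"
    interval => 60
  }
}
```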
Provide logs (if relevant):