
VMware logs stop SC4S #2591

Closed
liorbubynet opened this issue Sep 19, 2024 · 3 comments

Comments

@liorbubynet

Was the issue replicated by support?

What is the SC4S version?
30.26.1
Which operating system (including its version) are you using for hosting SC4S?
RHEL 9.3
Which runtime (Docker, Podman, Docker Swarm, BYOE, MicroK8s) are you using for SC4S?
podman
Is there a pcap available? If so, would you prefer to attach it to this issue or send it to Splunk support?
no
Is the issue related to the environment of the customer or Software related issue?
We don't know.
Is it related to data loss? Please explain.
Currently the VMware logs are not being forwarded to Splunk.
Protocol? Hardware specs?

Last chance index/Fallback index?

Is the issue related to local customization?
no
Do we have all the default indexes created?
yes
Describe the bug
We recently connected our SC4S to VMware.
The SC4S logs reported that everything was fine, but the syslog data from SC4S stopped being received by our Splunk Cloud.
During this period we noticed that our syslog-ng-0000*.qf disk-buffer files were filling up.
Our connection to Splunk Cloud goes through an Edge Processor.
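The `.qf` growth described above can be tracked with a small helper script. This is only a sketch: the example path is an assumption for a typical SC4S volume mount, not a path taken from this issue.

```shell
#!/usr/bin/env bash
# Hedged sketch: sum the size of syslog-ng disk-buffer (.qf) files so growth
# can be spotted before the buffer fills. The directory is an ASSUMED path
# for a typical SC4S deployment; adjust it to your volume layout.
buffer_total_kb() {
  local dir=$1 total=0 f kb
  for f in "$dir"/syslog-ng-0000*.qf; do
    [ -e "$f" ] || continue              # glob matched nothing
    kb=$(du -k "$f" | cut -f1)           # size of one buffer file, in KB
    total=$((total + kb))
  done
  echo "$total"
}

# Example (assumed path; run periodically, e.g. via cron or watch):
echo "disk-buffer total: $(buffer_total_kb /opt/sc4s/disk-buffer) KB"
```

Logging this every minute would show whether the spike correlates with enabling the VMware source.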

To Reproduce
Please contact me and I will demonstrate on our systems.

@rjha-splunk
Collaborator

@liorbubynet It looks like the EP endpoint is only intermittently available to SC4S; the disk-buffer files only fill up when SC4S destinations are unavailable. I would request that you create a support ticket as well. @ikheifets-splunk, can you please check this as well?
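One quick way to test this diagnosis (intermittent unreachability of the Edge Processor endpoint from the SC4S host) is a repeated TCP probe. The host and port below are placeholders, not values from this thread.

```shell
#!/usr/bin/env bash
# Hedged sketch: TCP-probe an SC4S destination to spot intermittent outages.
# The host/port are PLACEHOLDERS, not values taken from this issue.
probe() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device opens a TCP connection; bound it with timeout.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Example: run in a loop (e.g. every 10s) and timestamp the results.
probe "ep.example.internal" "8088"
```

Correlating "unreachable" samples with the times the `.qf` files grow would confirm or rule out the availability theory.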

@liorbubynet
Author

@rjha-splunk
We believe there are no gaps in availability between the two servers: they continue to operate fine right now with queue sizes between 30 and 56 MB, but once VMware is added the queue size spikes to 1 GB in a matter of minutes. Regarding network traffic, we see about 50 Mb without VMware; with it, it should be around 150 Mb based on the incoming traffic.
All of those are low numbers compared to the 10 Gb connection between them.
Splunk case 3573453

@ikheifets-splunk
Contributor

splunk case 3573453

I checked: support is actively working on this case, so we won't duplicate their work here.
