Description
Hi Team,
We are using the Logstash 3pp v7.15.2 in our project. When a remote syslog server restarts, Logstash takes 1-60 minutes to reconnect.
During this time none of the logs are stored, so they are simply lost. After the connection between rsyslog and Logstash is re-established everything works again, but the logs produced around the reconnection are lost; we would expect them to be stored in the Logstash persistent queue while the rsyslog output is unavailable.
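For reference, here is a minimal sketch of the kind of setup described above (all paths, hosts, ports, the input, and the facility/severity values are placeholders rather than our exact production configuration):

# Two pipelines, each with its own persisted queue, matching the queue paths
# visible in the stats below (pipeline ids "elasticsearch" and "syslog").
cat > /etc/logstash/pipelines.yml <<'EOF'
- pipeline.id: elasticsearch
  path.config: "/etc/logstash/conf.d/elasticsearch.conf"
  queue.type: persisted
  queue.max_bytes: 1gb
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"
  queue.type: persisted
  queue.max_bytes: 1gb
EOF

# Placeholder syslog pipeline that forwards events to the remote rsyslog server.
cat > /etc/logstash/conf.d/syslog.conf <<'EOF'
input {
  beats { port => 5044 }                  # placeholder input
}
output {
  syslog {
    host     => "rsyslog.example.com"     # placeholder remote rsyslog host
    port     => 514
    protocol => "tcp"
    facility => "daemon"
    severity => "informational"
  }
}
EOF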
We tried the following scenarios; these are our findings:
Case 1: Deployed Elasticsearch and Logstash (rsyslog output configured, but rsyslog not deployed)
Stats of the syslog pipeline:
"outputs" : [ {
"id" : "e6ad9c62172fd69b98c84ca8fada4ab96cb2e1b4bb5bc5248eead15a4b9f4012",
"events" : {
"out" : 0,
"in" : 1,
"duration_in_millis" : 86
},
"name" : "syslog"
} ]
},
"reloads" : {
"last_failure_timestamp" : null,
"successes" : 0,
"failures" : 0,
"last_error" : null,
"last_success_timestamp" : null
},
"queue" : {
"type" : "persisted",
"capacity" : {
"max_unread_events" : 0,
"max_queue_size_in_bytes" : 1073741824,
"queue_size_in_bytes" : 7904,
"page_capacity_in_bytes" : 67108864
},
"events" : 7,
"data" : {
"free_space_in_bytes" : 29855367168,
"storage_type" : "ext4",
"path" : "/opt/logstash/data/queue/syslog"
},
"events_count" : 7,
"queue_size_in_bytes" : 7904,
"max_queue_size_in_bytes" : 1073741824
Here we can see that the syslog pipeline's persistent queue stores events while rsyslog is not deployed, and once rsyslog is deployed it receives all of the logs.
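(The stats above come from the Logstash node stats API; assuming the default monitoring API port, something like the following returns the per-pipeline queue figures:)

# Queue/event stats for all pipelines (default API port 9600 assumed):
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'

# Or only the syslog pipeline:
curl -s 'http://localhost:9600/_node/stats/pipelines/syslog?pretty'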
Case 2: Deployed Elasticsearch, Logstash and rsyslog, then restarted both Elasticsearch and rsyslog while sending logs
Pipeline stats for the elasticsearch pipeline:
"name" : "elasticsearch"
}, {
"id" : "07aa8e0b7b6a3b03343369c6241012b28a5381ebbbb638b8ec25904c8e2f947b",
"events" : {
"out" : 0,
"in" : 0,
"duration_in_millis" : 68
},
"name" : "stdout"
} ]
},
"reloads" : {
"last_failure_timestamp" : null,
"successes" : 0,
"failures" : 0,
"last_error" : null,
"last_success_timestamp" : null
},
"queue" : {
"type" : "persisted",
"capacity" : {
"max_unread_events" : 0,
"max_queue_size_in_bytes" : 1073741824,
"queue_size_in_bytes" : 29636,
"page_capacity_in_bytes" : 67108864
},
"events" : 4,
"data" : {
"free_space_in_bytes" : 29845852160,
"storage_type" : "ext4",
"path" : "/opt/logstash/data/queue/elasticsearch"
},
"events_count" : 4,
"queue_size_in_bytes" : 29636,
"max_queue_size_in_bytes" : 1073741824
},
Pipeline stats for the syslog pipeline:
"name" : "syslog"
} ]
},
"reloads" : {
"last_failure_timestamp" : null,
"successes" : 0,
"failures" : 0,
"last_error" : null,
"last_success_timestamp" : null
},
"queue" : {
"type" : "persisted",
"capacity" : {
"max_unread_events" : 0,
"max_queue_size_in_bytes" : 1073741824,
"queue_size_in_bytes" : 29636,
"page_capacity_in_bytes" : 67108864
},
"events" : 0,
"data" : {
"free_space_in_bytes" : 29845852160,
"storage_type" : "ext4",
"path" : "/opt/logstash/data/queue/syslog"
},
"events_count" : 0,
"queue_size_in_bytes" : 29636,
"max_queue_size_in_bytes" : 1073741824
},
In this case events are not stored in the syslog pipeline's persistent queue, but they are stored in the elasticsearch pipeline's persistent queue.
Once Elasticsearch and rsyslog are back up, the logs sent while the outputs were disconnected arrive only in Elasticsearch, not in rsyslog.
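A rough way we think the problem can be reproduced without a full rsyslog deployment (nc is used here as a stand-in TCP syslog listener; host and port are placeholders):

# 1. Start a stand-in "rsyslog" TCP listener (placeholder for the real remote server).
nc -lk 514 > /tmp/received.log &
NC_PID=$!

# 2. Start Logstash with the syslog pipeline and send some test events;
#    they should show up in /tmp/received.log.

# 3. Simulate the rsyslog restart: stop the listener, keep sending events,
#    then bring the listener back after a minute or two.
kill "$NC_PID"
sleep 120
nc -lk 514 >> /tmp/received.log &

# 4. Compare the events sent during step 3 with /tmp/received.log and with the
#    syslog pipeline's queue stats; in our tests those events show up in neither.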
Could you please share your comments on this? We think Logstash is not detecting that rsyslog has restarted, so the events are not kept in the persistent queue and are lost.
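One way to check this (assuming the syslog output talks TCP on port 514) would be to look at the connection Logstash still holds after the rsyslog side has been restarted, for example:

# List Logstash's TCP connections towards the syslog port. If an old connection
# is still shown as ESTABLISHED after rsyslog has restarted, the output side has
# not noticed that the peer went away and keeps writing into a dead socket.
ss -tnp | grep ':514'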
Please fill in the following details to help us reproduce the bug:
Logstash information:
Please include the following information:
- Logstash version (e.g. bin/logstash --version)
- Logstash installation source (e.g. built from source, with a package manager: DEB/RPM, expanded from tar or zip archive, docker)
- How is Logstash being run (e.g. as a service/service manager: systemd, upstart, etc. Via command line, docker/kubernetes)
- How was the Logstash Plugin installed
JVM (e.g. java -version):
If the affected version of Logstash is 7.9 (or earlier), or if it is NOT using the bundled JDK or using the 'no-jdk' version in 7.10 (or higher), please provide the following information:
- JVM version (java -version) - jdk 11
- JVM installation source (e.g. from the Operating System's package manager, from source, etc).
- Value of the JAVA_HOME environment variable if set.
OS version (uname -a if on a Unix-like system):
Description of the problem including expected versus actual behavior: Described above
Steps to reproduce: Described above
Please include a minimal but complete recreation of the problem,
including (e.g.) pipeline definition(s), settings, locale, etc. The easier
you make for us to reproduce it, the more likely that somebody will take the
time to look at it.
Provide logs (if relevant):