The Fluentd log reports the next retry interval at XYZ, which can help you diagnose this issue. Where appropriate, add `retry_max_interval` to the output plugin configuration section to cap the maximum amount of time to wait until attempting a retry.
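A minimal sketch of capping the retry interval, assuming a v1-style `forward` output; the tag, server address, and the `30s` cap are placeholders:

```
<match myapp.**>
  @type forward
  <server>
    host 192.168.1.3
    port 24224
  </server>
  <buffer>
    retry_type exponential_backoff   # default: double the wait after each failure
    retry_wait 1s                    # first retry after 1s
    retry_max_interval 30s           # never wait longer than 30s between retries
  </buffer>
</match>
```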
Use `fluent-cat` to mimic a message being sent:
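For example, sending a single hand-crafted JSON event (the tag `debug.log` and the payload are placeholders; `fluent-cat` reads a record from stdin and forwards it to an `in_forward` listener, 127.0.0.1:24224 by default):

```
echo '{"message": "hello world"}' | fluent-cat debug.log

# or target a specific Fluentd instance
echo '{"message": "hello world"}' | fluent-cat --host fluentd.example.com --port 24224 debug.log
```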
## Further reading

* Make sure `flush_interval` is low enough that you are continuously flushing the buffer as you are reading data. For example, if you are reading 10,000 events per second, make sure you are not flushing data every hour; otherwise, your buffer can quickly fill up (see the configuration sketch after this list).
* Increase `workers` and `flush_thread_count`. If you have excessive messages per second and Fluentd is failing to keep up, adjusting these two settings will increase Fluentd's resource utilization but will hopefully allow the application to keep up with the required throughput.
* Check `total_limit_size` and consider a file buffer over a memory buffer: file buffers persist to disk, and their default limits are also much larger (64 GB for a file buffer vs. 512 MB for a memory buffer).
* Increase `total_limit_size`, as well as changing the maximum size of each chunk with `chunk_limit_size`.
* Check whether `read_from_head` is true: when tailing existing files from the beginning, a large backlog can land in the buffer all at once (see the `in_tail` sketch below).
* Inspect the `buffer_path` directory for data that has accumulated on disk and has not been flushed.
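A combined sketch of the buffering settings above, assuming a v1-style file buffer; every tag, path, size, and count is a placeholder to tune against your actual throughput:

```
<system>
  workers 2                      # run multiple worker processes
</system>

<match myapp.**>
  @type forward
  <server>
    host 192.168.1.3
    port 24224
  </server>
  <buffer>
    @type file                   # file buffers have much larger default limits
    path /var/log/fluent/buffer  # where unflushed chunks live on disk
    flush_interval 5s            # flush continuously rather than hourly
    flush_thread_count 4         # parallel flush threads for higher throughput
    chunk_limit_size 8MB         # maximum size of each chunk
    total_limit_size 16GB        # overall buffer cap before overflow occurs
  </buffer>
</match>
```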
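And where `read_from_head` appears, assuming an `in_tail` source with placeholder paths and tag:

```
<source>
  @type tail
  path /var/log/app/*.log
  pos_file /var/log/fluent/app.pos
  tag myapp.logs
  read_from_head true   # read existing file contents from the start; on large
                        # files this can flood the buffer as soon as Fluentd boots
  <parse>
    @type json
  </parse>
</source>
```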
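Finally, a quick check for unflushed data on disk, assuming the hypothetical buffer location /var/log/fluent/buffer (set with `buffer_path` in v0.12 configs, or `path` inside a v1 `<buffer>` section):

```
# list buffered chunks still waiting to be flushed
ls -lh /var/log/fluent/buffer/

# total size of unflushed data
du -sh /var/log/fluent/buffer/
```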